Logging and Reporting

Have questions about viewing logs with SmartView, generating reports with SmartEvent Event Management, or exporting logs to a SIEM with Log Exporter? This is where to ask!

Rodarcqu
Rodarcqu inside Logging and Reporting 12 hours ago
views 79 2

VLAN consumption custom report

Hello, I am trying to create a custom report for the IP range 10.10.3.0/24. I want to see everything that was consumed by that network during a month. Is that possible? I do not want to take just one sample; I want the report to include all the resources the network traffic consumed during the month and show all of them in a single column, not just a sample. Best Regards,
Danish_Javed1
Danish_Javed1 inside Logging and Reporting 14 hours ago
views 57 2

Checkpoint Integration with Solarwinds

Hello, I have 2 CP 5900 gateways in VSX managed by SmartConsole; the OS is R80.10. I am trying to integrate these with SolarWinds and have enabled VS mode in each gateway's SNMPv3 configuration. However, I cannot see any VS-specific data in SolarWinds, nor the VS interfaces; only the gateway interfaces are visible. Am I missing something in the configuration? Thanks
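Not from the original post, but a minimal sketch of what per-VS SNMP polling usually requires, assuming Gaia's VS SNMP mode and the per-VS SNMPv3 context naming described in sk90860; the user name, passphrases, and VS ID below are placeholders:

# On each VSX gateway (Gaia clish): enable the SNMP agent in VS mode
set snmp agent on
set snmp mode vs
save config

# From the poller: each Virtual System is queried with an SNMPv3 context
# (assumed naming ctxname_vsid<N>); without a context only VS0 / the
# physical gateway interfaces are returned, which matches the behaviour above.
snmpwalk -v3 -l authPriv -u <snmp-user> -a SHA -A <auth-pass> -x AES -X <priv-pass> \
    -n ctxname_vsid2 <gateway-ip> IF-MIB::ifDescr

In SolarWinds the same context string is set per node/VS so that it polls the Virtual System instead of the physical gateway.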
Ni_c
Ni_c inside Logging and Reporting yesterday
views 3278 8 1

MAC Info is missing in Log Profile

Hi Mates, I have a problem seeing MAC address information in the logs even though the MAC address feature is added to the profile. Could anyone help me find it, or tell me if I missed anything here? Thanks in advance.
custodio_khho
custodio_khho inside Logging and Reporting yesterday
views 71 1

Meaning of "0" in xlate fields

I have received logs which record an outgoing connection from my network. The log entry looks like this: "time=1573109409|hostname=xxxxx| .... |version=5|dst=23.227.38.64| .... |action=accept| .... |proto=6|s_port=43953|service=80|service_id=http|src=192.168.10.130|xlatedport=0|xlatedst=0.0.0.0|xlatesport=0|xlatesrc=192.200.135.180|" I have a valid value for xlatesrc, meaning NAT is done properly, but I see xlatesport as '0', which does not make sense to me. Can someone explain in which cases I will see a value of xlatesport=0? Thanks.
Keld_Norman
Keld_Norman inside Logging and Reporting yesterday
views 15661 11 9

How to add a new disk and expand the log file system

Here is a small guide on how to add a new disk >2 TB to your firewall and expand the size of /var/log.

Check whether we are running a 64-bit kernel (needed for handling >2 TB disks):
[Expert@firewall:0]# uname -a
Linux firewall 2.6.18-92cpx86_64 #1 SMP Sun Jan 21 10:26:26 IST 2018 x86_64 x86_64 x86_64 GNU/Linux

List the disks with fdisk -l or parted -l:
[Expert@firewall:0]# parted -l
Model: Msft Virtual Disk (scsi)
Disk /dev/sda: 100GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number  Start   End     Size    Type     File system  Flags
 1      32.3kB  313MB   313MB   primary  ext3         boot
 2      313MB   8900MB  8587MB  primary  linux-swap
 3      8900MB  107GB   98.5GB  primary               lvm

Model: Msft Virtual Disk (scsi)
Disk /dev/sdb: 34.4GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number  Start   End     Size    Type     File system  Flags
 1      65.5kB  34.4GB  34.4GB  primary  ntfs

Model: Msft Virtual Disk (scsi)
Disk /dev/sdc: 4295GB   <-- THIS IS THE NEW DISK
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number  Start   End     Size    File system  Name     Flags
 1      1049kB  4295GB  4295GB               primary  lvm

Now to the LVM part.

Prepare the new disk to be used in LVM using the parted utility:
[Expert@firewall:0]# parted -s /dev/sdc mklabel gpt
[Expert@firewall:0]# parted -s /dev/sdc unit mib mkpart primary 1 100%
[Expert@firewall:0]# parted -s /dev/sdc set 1 lvm on
[Expert@firewall:0]# # Ask the kernel to re-read the partition table
[Expert@firewall:0]# partprobe

One could skip this partitioning step and just add the whole disk as the LVM physical volume created in the next step, but I do it this way to ensure there is information on the disk (about how it is used), so when other sysadmins or tools list the disk they see a partition instead of a disk that appears empty; this might stop them from assuming it is "free" to use.

Create the LVM physical volume and add it to the existing volume group:
[Expert@firewall:0]# # Tag/prepare/reserve the disk so it can be used in the LVM/VG
[Expert@firewall:0]# pvcreate /dev/sdc1
[Expert@firewall:0]# # Then add the new LVM disk to the volume group
[Expert@firewall:0]# vgextend vg_splat /dev/sdc1

Now I will list the current location of /dev/vg_splat/lv_log (that is where the /var/log file system resides) and see where the data is placed on the two disks I now have in the volume group vg_splat. My goal is to have the log file system reside on the new disk only and not on the OS disk.

List the current location of the /var/log file system (the lv_log logical volume):
[Expert@firewall:0]# lvs -o +devices    # use "pvdisplay -m" for a more detailed view
  LV         VG       Attr   LSize  ...  Devices
  lv_current vg_splat -wi-ao 20.00G      /dev/sda3(0)
  lv_log     vg_splat -wi-ao 63.00G      /dev/sda3(640)  <-- we want to move this data to sdc1
(The above shows that lv_log resides on partition 3 of disk sda (/dev/sda3), and we want to move it to the new disk, sdc.)

Now let's move the existing /var/log data off the disk that holds the operating system, both to speed up I/O and to ensure space for log files is only allocated on the new disk. This can be done in the background without blocking existing I/O during the move. I recommend adding the extra option "--background"; that way you can also disconnect the secure shell session and not have to wait for the command to finish (it can take hours).

Move the existing log file system from the system disk to the dedicated log disk (shown here as a foreground process):
[Expert@firewall:0]# # NB: I recommend adding the extra option --background to the command below
[Expert@firewall:0]# #        Move [FROM disk] [TO disk]
[Expert@firewall:0]# pvmove -n /dev/vg_splat/lv_log /dev/sda3 /dev/sdc1
  /dev/sda3: Moved: 0.6%
  /dev/sda3: Moved: 1.4%
  ...
  /dev/sda3: Moved: 100%

Then verify that the data has been moved correctly. List the location of the logical volumes on the PV disks again:
[Expert@firewall:0]# lvs -o +devices
  LV         VG       Attr   LSize  Origin Snap%  Move Log Copy%  Devices
  lv_current vg_splat -wi-ao 20.00G                               /dev/sda3(0)
  lv_log     vg_splat -wi-ao 63.00G                               /dev/sdc1(0)  <-- Perfect
(The above shows that lv_log now resides only on /dev/sdc1.)

Now I want to expand the file system on THE NEW DISK only.
TIP: When you expand a file system on a logical volume you can utilize all the free space by using "100%FREE" (without the quotation marks) instead of the "+3910G" used in my example below (see the sketch after the Quickie at the end of this post). So let's extend the logical volume, with /dev/sdc1 given as an option.

Extend the log volume to utilize the new space:
[Expert@firewall:0]# lvextend -L +3910G /dev/vg_splat/lv_log /dev/sdc1
  Extending logical volume lv_log to 3.88 TB
  Logical volume lv_log successfully resized

Then we resize the file system to fit the logical volume:
[Expert@firewall:0]# resize2fs /dev/vg_splat/lv_log
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/vg_splat/lv_log is mounted on /var/log; on-line resizing required
Performing an on-line resize of /dev/vg_splat/lv_log to 1041498112 (4k) blocks.
The filesystem on /dev/vg_splat/lv_log is now 1041498112 blocks long.

Check that the data still resides on /dev/sdc1 for lv_log:
[Expert@firewall:0]# lvs -o +devices
  LV         VG       Attr   LSize  Origin Snap%  Move Log Copy%  Devices
  lv_current vg_splat -wi-ao 20.00G                               /dev/sda3(0)
  lv_log     vg_splat -wi-ao  3.88T                               /dev/sdc1(0)

An extra check to see the file system size in human-readable format (-h). Verify that the log file system has been expanded:
[Expert@firewall:0]# df -h /var/log
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/vg_splat-lv_log  3.8T   40G  3.6T   2% /var/log

An extra check to ensure we can write to and read from the file system. Verify that the system can write to the file system:
[Expert@firewall:0]# touch /var/log/deleteme && ls -al /var/log/deleteme && rm /var/log/deleteme
-rw-rw---- 1 admin users 0 Oct 22 13:42 /var/log/deleteme
[Expert@firewall:0]# ls -al /var/log/deleteme
ls: /var/log/deleteme: No such file or directory

That's it.

A "Quickie" to run in expert mode:
parted -s /dev/sdc mklabel gpt
parted -s /dev/sdc unit mib mkpart primary 1 100%
parted -s /dev/sdc set 1 lvm on
partprobe
pvcreate /dev/sdc1
vgextend vg_splat /dev/sdc1
pvmove --background -n /dev/vg_splat/lv_log /dev/sda3 /dev/sdc1
lvextend -L +3910G /dev/vg_splat/lv_log /dev/sdc1
resize2fs /dev/vg_splat/lv_log

The end
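Following the TIP above, here is the 100%FREE variant of the extend step as a minimal sketch (same volume group, logical volume, and disk names as in the guide; note the lower-case -l, which works in extents/percentages rather than an absolute size):

# Grow lv_log into all remaining free space on /dev/sdc1, then resize the file system
lvextend -l +100%FREE /dev/vg_splat/lv_log /dev/sdc1
resize2fs /dev/vg_splat/lv_log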
Simon_Drapeau
Simon_Drapeau inside Logging and Reporting yesterday
views 1934 9 1

R80.10 take 42 - SmartEvent - log_indexer keeps crashing (often 100% CPU usage)

R80.10 distributed architecture: 2 x 15600 appliances in a VSX cluster, VSLS (one VS currently active, 7 others coming), and a Smart-1 3150 appliance (management server).

Since last week, SmartEvent has not been able to index all the logs on time. Under heavy load we see a 3-4 hour delay behind real time.

DIAGNOSTIC: I get a verbose error message about log_indexer when the logs are loaded, just after the process crashes. A message in the file /opt/CPrt-R80/log_indexer.elg indicates that an error might have occurred: [log_indexer 13211 4053793680]@fw_name[Date time] SolrClient::Send: connection failure with 127.0.0.1:8210 (curl error: )(curl error number: 56). This message indicates that the indexer process (log_indexer) couldn't send the logs to the log database engine.

I tried changing the priority of this process to help the system prioritize it. The default priority of the log_indexer process is 19; I changed it to the best priority available, which is 0: renice -n 0 -p <pid number>. No significant improvement.

Ticket opened. TAC said:
- log_indexer process crashing every 5-20 minutes
- log_indexer.elg shows "I'm sleep" / connection failed to 127.0.0.1
- All other processes appear to be working correctly
- log_indexer consuming 100% CPU

TROUBLESHOOTING: Referenced previous tickets all point to a fresh install. R&D will be our next step.

Any hints regarding this issue? I'm out of ideas. Regards, Simon
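Not part of the original troubleshooting, just a small sketch of how one might keep an eye on the indexer while waiting on TAC/R&D, assuming the process name log_indexer and the .elg path quoted above:

# Check whether the indexer is running and how much CPU it is using
ps auxw | grep [l]og_indexer
# Re-apply the priority change after each crash (the PID changes on restart)
renice -n 0 -p $(pidof log_indexer)
# Follow the indexer log for the SolrClient connection failures
tail -f /opt/CPrt-R80/log_indexer.elg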
Juraj_Skalny
Juraj_Skalny inside Logging and Reporting yesterday
views 108 1 4

DNS Trap Protection

Hello guys, I would like to follow up on the following posts:
https://community.checkpoint.com/t5/Logging-and-Reporting/Threat-Prevention-dns-trap-and-resource-categorization/td-p/18638
https://community.checkpoint.com/t5/IPS-Anti-Virus-Anti-Bot-Anti/Some-DNS-request-not-block-by-AV-blade/m-p/26588#M784

What we would like to find out is how long the firewall keeps the information about a malicious domain in its cache. The DNS reply is changed to the bogus IP by the firewall as long as the malicious domain is in the cache. The problem we see is that the cache is perhaps too short, because "Connection was allowed because background classification mode was set. See sk74120 for more information." appears in the logs too often for the same malicious domain. We would expect to see this classification event once and then lots of changes to the bogus IP, but that is not the case. There is no Check Point documentation covering this information or how to change it, or we have simply overlooked it. In our understanding, a lot of malicious activity is allowed this way, only because the firewall has to let DNS resolution requests through while they are classified, over and over again.

Thanks and regards, Juraj
Dan_Zada
inside Logging and Reporting yesterday
views 2516 28 8
Employee+

Log Exporter Filtering

Hello all, I'm happy to inform you that we have added a new feature to Log Exporter: the ability to filter logs. Starting today, you can configure which logs will be exported, based on fields and values, including complex statements. More information, including basic and advanced filtering instructions, can be found in sk122323. If you have any questions or comments, let me know. Thanks! Dan.
masher
inside Logging and Reporting Saturday
views 192 3
Employee+

Log Exporter and Elasticsearch

There have been multiple posts requesting information about Check Point's options for integration with an ELK stack for logging. Many of the posts recommend using Log Exporter over LEA for exporting logs to that system. I have not seen a post that covers configuration of the ELK stack in order to use this exported data. I put the details behind spoiler tags, as this is a fairly long-winded post and I cover some of the basics before getting into the configuration options.

Screenshots (spoiler)
[Screenshots: log data within the "Discover" section of Kibana, and a basic dashboard.]

Documentation (spoiler)

Software Revisions
- Check Point R80.30 (JHF 76)
- Elasticsearch 7.4.0
- Logstash 7.4.0
- Logstash plugins: logstash-input-udp, logstash-filter-kv, logstash-filter-mutate, logstash-output-elasticsearch
- Kibana 7.4.0
- NGINX
- Arch Linux

Elasticsearch
What is Elasticsearch? From Elastic's website: "Elasticsearch is a distributed, RESTful search and analytics engine capable of solving a growing number of use cases. As the heart of the Elastic Stack, it centrally stores your data so you can discover the expected and uncover the unexpected."
Elasticsearch provides an API-driven indexing solution for querying and storing large quantities of data. It can be used as a standalone or clustered solution. Elasticsearch also has additional deployment options providing real-time search integration with other database and storage platforms. In Arch Linux, Elasticsearch configuration files are stored under /etc/elasticsearch. This guide uses the default settings within the elasticsearch.yml file. The default configuration file has reasonable defaults and listens on TCP port 9200 on localhost. The Java configuration is stored in the jvm.options file; this is used to adjust the memory requirements of the Java processes running Elasticsearch.

Kibana
What is Kibana? From Elastic's website: "Kibana lets you visualize your Elasticsearch data and navigate the Elastic Stack…"
Kibana is a customizable web interface that interacts with Elasticsearch in order to build dashboards and visualizations, or to search stored data. In Arch Linux, the configuration folder is /etc/kibana. The kibana.yml file has a useful set of defaults that I recommend validating:
- server.port: 5601 — the service port the server will listen on for inbound HTTP(S) connections.
- server.host: "localhost" — the server name Kibana will use for this particular instance. Since this document only uses a single instance, the default name was used.
- server.basePath: "/kibana" — the URL path used when running behind a web service or reverse proxy. In this example, requests will be sent to http://host/kibana.
- server.rewriteBasePath: true — forces the Kibana process to rewrite outbound responses to include the basePath in the URL.
The default template documents the additional settings available for Kibana; they are also described at https://www.elastic.co/guide/en/kibana/master/settings.html.

Logstash
What is Logstash? From Elastic's website: "Logstash is an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite 'stash.' (Ours is Elasticsearch, naturally.)"
Logstash provides a method for receiving content, manipulating that content, and forwarding it on to a backend system for additional use. Logstash can manipulate these streams into acceptable formats for storage and indexing, or for additional processing of the content. In Arch Linux, the configuration folder is /etc/logstash. The Arch Linux package starts the logstash process and reads the configuration files under /etc/logstash/conf.d. This allows logstash to use multiple input sources, filters, and outputs. For example, logstash could be configured as a syslog service and output specific content to an Elasticsearch cluster on the network. It could also have another configuration file that listens on port 5044 for separate data to export into a different Elasticsearch cluster.

Logstash Plugins
Depending on the Linux distribution, several logstash plugins might not be available from the distribution's package repository. To add plugins, use git to import them from the Logstash plugin repository. The logstash-plugins repositories are available online at https://github.com/logstash-plugins/ and their installation process is documented on that site.

NGINX
NGINX is a web and reverse proxy server. It also has other capabilities such as load balancing and web application firewall. The configuration for nginx is located in the /etc/nginx directory. Since NGINX can have a wide array of configurations, only the configuration used to allow access to Kibana is documented below. The configuration below accepts connections to /kibana and forwards them to the localhost:5601 service. For additional configuration options for NGINX, see https://docs.nginx.com.

Snippet from /etc/nginx/nginx.conf:
http {
    include mime.types;
    sendfile on;
    keepalive_timeout 65;
    ## This provides a reference to the Kibana service listening on localhost:5601
    upstream kibana {
        server 127.0.0.1:5601;
        keepalive 15;
    }
    server {
        listen 80;
        server_name _;
        location / {
            root /usr/share/nginx/html;
            index index.html index.htm;
        }
        location /kibana {
            proxy_pass http://kibana;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
}

Check Point Log Exporter
Check Point Log Exporter documentation can be found in SK article 122323 on Support Center. The configuration used in this document is very simple. The logstash configuration file expects the syslog messages to use the Splunk format. The Splunk format delineates fields with the | character, which gives Logstash's kv filter an easy delimiter for splitting the log fields.

[Expert@vMgmt01:0]# cp_log_export add name lab01 target-server 10.99.99.11 target-port 514 protocol udp format splunk read-mode semi-unified
Export settings for lab01 has been added successfully
To apply the changes run: cp_log_export restart name lab01
[Expert@vMgmt01:0]#

Logstash Configuration
There is a single configuration file defined for logstash in this example. Logstash will attempt to load all YML files in the configuration directory. Each file could have a different set of inputs, filters, and outputs. The location of the YAML file might differ depending on the distribution. In this example, there are three primary sections: input, filter, and output.
The input configuration tells the logstash process which plugins to run to receive content from external sources. Our example uses the UDP input plugin (logstash-input-udp) and is configured to act as a syslog service. Logstash has many other options for input types and content codecs. Additional information can be found in the logstash input plugin documentation at https://www.elastic.co/guide/en/logstash/current/input-plugins.html.

input {
    udp {
        port => 514
        codec => plain
        type => syslog
    }
}

Logstash filters match, label, edit, and make decisions on content before it passes into Elasticsearch (ES). The filter section is where the patterns and labels are defined. In our example, logstash processes the syslog data through two separate filters. The kv filter is used to automatically assign labels to the received logs. Since the messages sent via Log Exporter are in a <key>=<value> format, the kv filter was chosen; it provides a simpler mechanism than using the grok or dissect plugins for assigning those values.

filter {
    if [type] == "syslog" {
        kv {
            allow_duplicate_values => false
            recursive => false
            field_split => "\|"
        }
        mutate {
            # Fields beginning with underscore are not supported by ES/Kibana, so rename them.
            rename => { "__nsons" => "nsons" }
            rename => { "__p_dport" => "p_dport" }
            rename => { "__pos" => "pos" }
            # Not necessary, just a field name preference.
            rename => { "originsicname" => "sicname" }
            # Example of removing specific fields
            # remove_field => [ "connection_luuid", "loguid" ]
            # String substitution
            # Strip the O\=.*$ and the ^CN= in the field.
            gsub => [
                "sicname", "CN\\=", "",
                "sicname", ",O\\=.*$", ""
            ]
        }
    }
}

The mutate plugin (logstash-filter-mutate) is used to manipulate the data as needed. In the configuration above, the originsicname field is renamed to sicname. Additional fields can be dropped using the remove_field configuration. The configuration can be validated using the "-t" parameter when launching logstash; configuration errors will be displayed, including the line numbers of the error.

/usr/share/logstash/bin/logstash -t -f /etc/logstash/conf.d/checkpoint.yml

Kibana - Basic Configuration
The default settings for Kibana are used in this document, so there are no additional configuration steps necessary. The NGINX configuration from the earlier section passes requests for http://hostname/kibana to the http://localhost:5601 service. After opening the management page, Kibana needs to be configured to use the data contained within Elasticsearch. Go to Management -> Index Patterns. Select Create Pattern, add a pattern that matches the logstash indices, and click Next. The next step will ask for the Time Filter setting; select the @timestamp default and click "Create Index Pattern". Once this step has been completed, Kibana can present the available data from Elasticsearch.
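One thing the configuration above does not show is the output section that hands the filtered events to Elasticsearch (the logstash-output-elasticsearch plugin from the revision list). A minimal sketch is below; the single-node host on 127.0.0.1:9200 and the daily index name are my assumptions, not from the original post, and any logstash-* index name will match the index pattern created in Kibana.

output {
    if [type] == "syslog" {
        elasticsearch {
            # Assumed local single-node Elasticsearch; list all nodes for a cluster.
            hosts => ["http://127.0.0.1:9200"]
            # Hypothetical daily index name; adjust to taste.
            index => "logstash-checkpoint-%{+YYYY.MM.dd}"
        }
    }
}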
Qixing_Cao
Qixing_Cao inside Logging and Reporting Thursday
views 135

Trouble with Generating over 30 days' Checkup Report in Large Environment

Respected community experts, hi. I have run into some trouble in a recent Checkup engagement. In the customer's environment, the log files are roughly 1-2 GB a day and the Checkup runs for over 2 months, so we need a Checkup report covering 60 days. We find it impossible to generate the report: all data in the preview window shows "query failed" after a short time, and when we try to export the report using a scheduled task, it shows "failed" a few minutes later. The deployment is distributed (one 23800 gateway receiving the monitored traffic, one VM SmartCenter for management and logging). The VM SmartCenter has 24 cores and 24 GB of memory (as we have observed, system resource usage is not usually high, and disk space usage is only about 25%). The activated blades are Firewall (no track action), IPS, Anti-Bot, Anti-Virus, and Threat Emulation. Has anyone run into a similar situation?
rajesh_s
rajesh_s inside Logging and Reporting Thursday
views 21891 17 8

Checkpoint Gateways are not sending the logs to Checkpoint management server

Hi All, We are using Check Point R77.30 firewalls. The gateways are not sending logs to the Check Point management server. Has anyone had a similar issue?
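A few first checks commonly used for this (not from the original post; a rough sketch assuming an R77.30 gateway in expert mode, so verify the paths and output against your environment):

# On the gateway: is fwd running, and which log server(s) is it configured to use?
cpwd_admin list | grep -i fwd
cat $FWDIR/conf/masters
# Is there an established connection to the management/log server on TCP 257?
netstat -an | grep 257
# If log files are accumulating locally, the gateway has fallen back to local logging
fw lslogs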
BigPAM
BigPAM inside Logging and Reporting Thursday
views 239 2

Compliance vs AlgoSec

Is it unfair of me to compare the Compliance blade with a policy audit tool such as AlgoSec Firewall Analyzer? I am trying to create custom rules to find specific flows (i.e. traffic that originates on our internal network and goes to the internet, bypassing the proxy). The Compliance blade seems to be object-based rather than breaking the policy down into base metadata as some of the well-known firewall audit tools do. I can't seem to get the above example to work (because the Compliance blade seems to be looking for specific objects?). Are there any sources of Compliance blade documentation other than the videos or the R80.10 ATRG? They don't seem to go deep enough for me to figure this out myself. Thanks.
Niels_Poulsen
inside Logging and Reporting a week ago
views 955 5 1
Employee+

0 Gateways or server reporting logs

Just reinstalled my setup with build 94; since then I have 0 gateways or servers listed in the logs area. How can I debug this?
Rodrigo_Castell
Rodrigo_Castell inside Logging and Reporting a week ago
views 321 5

LogExporter IPS logs to ArcSight CEF

Hello, is anyone sending logs to ArcSight while using the IPS blade? I'm having an issue where these specific logs are not sending the destination address. This only happens with IPS events; the rest of the blades do send the fields I need. This is on R80.10, latest JHF, on both SmartEvent and the gateways.
Aitor_Carazo
Aitor_Carazo inside Logging and Reporting a week ago
views 173

[SmartEvent] Mail alerts: source and destination empty only with custom IOCs

Hi CheckMates, I have recently imported some custom IOCs and configured SmartEvent to send mail alerts for Virus and Bot events. When an event matches a custom IOC, the mail alert has the source and destination empty. This only happens with custom IOCs; other events work fine. Regards