Logging and Reporting

Have questions about viewing logs with SmartView, generating reports with SmartEvent Event Management, or exporting logs to a SIEM with Log Exporter? This is where to ask!

JS_FW inside Logging and Reporting yesterday
views 23

Log Exporter monitoring, and position "catch up"

Hi there, We've recently migrated to Log Exporter from OPSEC LEA (R80.20 SMS), exporting to LogRhythm. For the most part things are working great. Unfortunately, our IR team is reporting gaps in our logs. I wanted to know whether there is a way to monitor for when the service restarts, and whether there is a way to "catch up" from the last transmitted log point. There was a position marker they were able to use when they were pulling from OPSEC LEA, but now that we are pushing, they can't use that. Any help or guidance is appreciated.
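One low-tech approach for the monitoring half, until something better turns up, is a cron job on the SMS that checks whether the exporter instance still has a running process and kicks it if not. This is only a sketch: it assumes the exporter was created with a name (here a placeholder) that is identifiable in the process table, which is worth verifying on your box before relying on it.

#!/bin/bash
# Hypothetical watchdog sketch - confirm the exporter's process really is
# identifiable this way in your environment before trusting it.
EXPORTER_NAME="lr_exporter"   # placeholder: the name used in cp_log_export add
if ! ps -ef | grep -v grep | grep log_exporter | grep -q "$EXPORTER_NAME"; then
    logger -t log_exporter_watchdog "$EXPORTER_NAME not running, restarting"
    cp_log_export restart name "$EXPORTER_NAME"
fi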
masher
inside Logging and Reporting yesterday
views 483 4 1
Employee+

Log Exporter and Elasticsearch

There have been multiple posts requesting information about Check Point's options for integrating with an ELK stack for logging. Many of the posts recommend using Log Exporter over LEA for exporting logs to that system, but I have not seen a post that covers configuring the ELK stack to use this exported data. I put the details behind spoiler tags, as this is a fairly long-winded post; I cover some of the basics before getting into the configuration options.

Screenshots (spoiler): log data within the "Discover" section of Kibana, and a basic dashboard.

Documentation (spoiler):

Software Revisions
- Check Point R80.30 (JHF 76)
- Elasticsearch 7.4.0
- Logstash 7.4.0 (plugins: logstash-input-udp, logstash-filter-kv, logstash-filter-mutate, logstash-output-elasticsearch)
- Kibana 7.4.0
- NGINX
- Arch Linux

Elasticsearch
What is Elasticsearch? From Elastic's website: "Elasticsearch is a distributed, RESTful search and analytics engine capable of solving a growing number of use cases. As the heart of the Elastic Stack, it centrally stores your data so you can discover the expected and uncover the unexpected."
Elasticsearch provides an API-driven indexing solution for querying and storing large quantities of data. It can be used as a standalone or clustered solution, and it has additional deployment options providing real-time search integration with other database and storage platforms. In Arch Linux, the Elasticsearch configuration files are stored under /etc/elasticsearch. This guide uses the default settings within the elasticsearch.yml file; the defaults are reasonable and the service listens on TCP port 9200 on localhost. The Java configuration is stored in the jvm.options file, which is where the memory settings for the Java processes running Elasticsearch are adjusted.

Kibana
What is Kibana? From Elastic's website: "Kibana lets you visualize your Elasticsearch data and navigate the Elastic Stack…"
Kibana is a customizable web interface that interacts with Elasticsearch in order to build dashboards and visualizations, or to search the stored data. In Arch Linux, the configuration folder is /etc/kibana. The kibana.yml file has a useful set of defaults that I recommend validating:
- server.port: 5601 - the service port the server listens on for inbound HTTP(S) connections.
- server.host: "localhost" - the server name that Kibana will use for this particular instance. Since this document only uses a single instance, the default name was used.
- server.basePath: "/kibana" - the URL path used when running behind a web service or reverse proxy. In this example, requests will be sent to http://host/kibana.
- server.rewriteBasePath: true - forces the Kibana process to rewrite outbound responses to include the basePath in the URL.
The default template documents the additional settings available for Kibana; they are also described at https://www.elastic.co/guide/en/kibana/master/settings.html.
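Put together as a file, the settings above would look something like the following kibana.yml fragment (values exactly as discussed; everything else stays at its default):

server.port: 5601
server.host: "localhost"
server.basePath: "/kibana"
server.rewriteBasePath: true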
Logstash
What is Logstash? From Elastic's website: "Logstash is an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite 'stash.' (Ours is Elasticsearch, naturally.)"
Logstash provides a method for receiving content, manipulating that content, and forwarding it to a backend system for additional use. Logstash can manipulate these streams into formats acceptable for storage and indexing, or for additional processing of the content. In Arch Linux, the configuration folder is /etc/logstash. The Arch Linux package starts the logstash process and reads the configuration files under /etc/logstash/conf.d. This allows logstash to use multiple input sources, filters, and outputs. For example, logstash could be configured as a syslog service and output specific content to an Elasticsearch cluster on the network. It could also have another configuration file that listens on port 5044 for separate data to export into a different Elasticsearch cluster.

Logstash Plugins
Depending on the Linux distribution, several logstash plugins might not be available from the distribution's package repository. To add plugins, git is used to import them from the Logstash plugin repository. The logstash-plugins repositories are available online at https://github.com/logstash-plugins/ and the installation process is documented on that site.

NGINX
NGINX is a web and reverse proxy server. It also has other capabilities such as load balancing and web application firewall. The configuration for nginx is located in the /etc/nginx directory. Since NGINX can have a wide array of configurations, only the configuration used to allow access to Kibana is documented below. It accepts connections to /kibana and forwards them to the localhost:5601 service. For additional configuration options for NGINX, see https://docs.nginx.com.

Snippet from /etc/nginx/nginx.conf:

http {
    include mime.types;
    sendfile on;
    keepalive_timeout 65;
    ## Upstream reference to the Kibana service listening on localhost:5601
    upstream kibana {
        server 127.0.0.1:5601;
        keepalive 15;
    }
    server {
        listen 80;
        server_name _;
        location / {
            root /usr/share/nginx/html;
            index index.html index.htm;
        }
        location /kibana {
            proxy_pass http://kibana;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
}

Check Point Log Exporter
Check Point Log Exporter documentation can be found in SK article 122323 on Support Center. The configuration used in this document is very simple. The logstash configuration file expects the syslog messages to use the Splunk format, which delimits fields with the | character. This gives Logstash's kv filter an easy way to split the log fields.

[Expert@vMgmt01:0]# cp_log_export add name lab01 target-server 10.99.99.11 target-port 514 protocol udp format splunk read-mode semi-unified
Export settings for lab01 has been added successfully
To apply the changes run: cp_log_export restart name lab01
[Expert@vMgmt01:0]#
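If nothing shows up on the Logstash side after the restart, two quick sanity checks help narrow things down: the exporter definition can be dumped on the management server (the same show sub-command other posts on this board use), and the collector can be checked for arriving datagrams (this assumes tcpdump is installed on the Arch Linux host):

[Expert@vMgmt01:0]# cp_log_export show name lab01
# on the Arch Linux collector
tcpdump -ni any udp port 514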
Logstash Configuration
There is a single configuration file defined for logstash in this example. Logstash will attempt to load all of the YML files in the configuration directory, and each file could have a different set of inputs, filters, and outputs. The location of the YML file might differ depending on the distribution. In this example there are three primary sections: input, filter, and output.

The input configuration tells the logstash process which plugins to run to receive content from external sources. Our example uses the UDP input plugin (logstash-input-udp) and is configured to act as a syslog service. Logstash has many other options for input types and content codecs; additional information can be found in the Logstash input plugin documentation at https://www.elastic.co/guide/en/logstash/current/input-plugins.html.

input {
    udp {
        port => 514
        codec => plain
        type => syslog
    }
}

Logstash filters match, label, edit, and make decisions on content before it passes into Elasticsearch (ES). The filter section is where the patterns and labels are defined. In our example, logstash processes the syslog data through two separate filters. The kv filter is used to automatically assign labels to the received logs: since the messages sent via log_exporter are in a <key>=<value> format, the kv filter provides a simpler mechanism than the grok or dissect plugins for assigning those values.

filter {
    if [type] == "syslog" {
        kv {
            allow_duplicate_values => false
            recursive => false
            field_split => "\|"
        }
        mutate {
            # Fields beginning with an underscore are not supported by ES/Kibana, so rename them.
            rename => { "__nsons" => "nsons" }
            rename => { "__p_dport" => "p_dport" }
            rename => { "__pos" => "pos" }
            # Not necessary, just a field name preference.
            rename => { "originsicname" => "sicname" }
            # Example of removing specific fields
            # remove_field => [ "connection_luuid", "loguid" ]
            # String substitution
            # Strip the O\=.*$ and the ^CN= in the field.
            gsub => [
                "sicname", "CN\\=", "",
                "sicname", ",O\\=.*$", ""
            ]
        }
    }
}

The mutate plugin (logstash-filter-mutate) is used to manipulate the data as needed. In the configuration above, the originsicname field is renamed to sicname. Additional fields can be dropped using the remove_field configuration. The configuration can be validated using the "-t" parameter when launching logstash; configuration errors will be displayed along with the line numbers of the error.

/usr/share/logstash/bin/logstash -t -f /etc/logstash/conf.d/checkpoint.yml
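The plugin list mentions logstash-output-elasticsearch, but the post does not show an output section. A minimal sketch of what one could look like, assuming the default Elasticsearch listener on localhost:9200 described earlier (the index name here is illustrative, not from the original post):

output {
    if [type] == "syslog" {
        elasticsearch {
            hosts => ["http://localhost:9200"]
            # Illustrative index name; match it with the index pattern created in Kibana below.
            index => "logstash-checkpoint-%{+YYYY.MM.dd}"
        }
        # For troubleshooting, events can also be echoed to the console:
        # stdout { codec => rubydebug }
    }
}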
Kibana - Basic Configuration
The default settings for Kibana are used in this document, so there are no additional configuration steps necessary. The NGINX configuration from the earlier section passes requests for http://hostname/kibana to the http://localhost:5601 service. After opening the management page, Kibana needs to be configured to use the data contained within Elasticsearch. Go to Management -> Index Patterns, select Create Pattern, add a pattern that matches the logstash indices, and click Next. The next step will ask for the Time filter setting; select the @timestamp default and click "Create Index Pattern". Once this step has been completed, Kibana can present the available data from Elasticsearch.
Kenneth_Greger1 inside Logging and Reporting yesterday
views 2525 7 2

Log Indexer crashing - SmartLog not working

Hi, We have been struggling, since before Christmas, with our R80.10 SmartCenter server (R80.10 - Build 439). Every now and then (after a few hours and/or days) SmartLog stops working, meaning it is not possible to view the log files in the SmartConsole GUI client (SmartView). We can see that the SmartCenter is receiving the logs, but the INDEXER service is crashing. A workaround has been to run evstop, then look into $INDEXERDIR/log/log_indexer.elg and find the offending log file that the INDEXER process is not able to parse. Typically the offending file name shows up right before entries that read:

log_indexer 30145 3804232592] Jan 16:05:41] Start reading 127.0.0.1:2019-01-02_151203_1.log [1546423998] at position 5738761
[2 Jan 16:05:41] CBinaryLogFile::ReplaceFileToTableMemStringID: error - can't get mem string id
[2 Jan 16:05:41] CBinaryLogFile::ReplaceTableStringId error: couldn't get file string_id, will set to default NULL VALUE
[2 Jan 16:05:41] CBinaryLogFile::ReplaceFileToTableMemStringID: error - can't get mem string id
[2 Jan 16:05:41] CBinaryLogFile::ReplaceTableStringId error: couldn't get file string_id, will set to default NULL VALUE

Then we edit the file $INDEXERDIR/data/FetchedFiles, mark the offending file as finished, and the INDEXER moves on to the next log file. This procedure is described in sk116117. In some cases it does not indicate which file is problematic at all; what we do then is evstop;evstart, and (usually) after some time it will show the offending log file. We have tried to re-install the SmartCenter, but the problem persists. Both our vendor and Check Point are involved in the case, but so far they have not come up with a solution. Any input is greatly appreciated. /Kenneth
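If the .elg is large, the candidate file names can usually be pulled out without scrolling by grepping for the error string quoted above with a couple of lines of leading context (paths as used in this post; adjust if your $INDEXERDIR differs):

grep -B2 "ReplaceFileToTableMemStringID" $INDEXERDIR/log/log_indexer.elg | grep "Start reading"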
Dilian_Chernev inside Logging and Reporting Thursday
views 77 3

Duplicated logs in Smartconsole with Management HA

Hello community 🙂 I have a setup with two R80.20 management servers, primary and secondary, and all gateways are logging to both servers simultaneously. In the SmartConsole logs there are two rows displayed for each connection, and the only difference is the "Log Server Origin" field: the first log is from the primary management and the second is from the secondary management. Is this normal behaviour? Can it be tuned somehow to show only logs from the primary management? Thanks
MiteshAgrawal15 inside Logging and Reporting Thursday
views 185 5

Audit events not received from Checkpoint R80.10 using Log Exporter Solution

Hi Team, We are facing issues getting audit logs from Check Point R80.10. We have Micro Focus ArcSight in our environment. We are receiving the traffic logs without any issues and they are parsed properly. But the audit logs which we receive in SmartConsole are parsed as "Log" and not much is captured in the raw event. At the same time, from other firewalls which are on R80.20 we are receiving "Log In", "Log Out", "Modify Rule" etc. events, and the username and other details are captured. The targetConfiguration.xml settings file is set to "all":

<log_types></log_types><!--all[default]|log|audit/-->

Also, the output of the cp_log_export show command on the management server is:

name: ArcSightLog
enabled: true
target-server: Agent Server IP
target-port: 514
protocol: udp
format: cef
read-mode: semi-unified

In one of the threads I saw that the domain-server argument needs to be provided while configuring the log export destination. Can someone please check the above config and tell me whether it was added or not? Please help in rectifying the issue. What configuration do we need to use in order to receive audit logs from R80.10 using the Log Exporter solution? TIA, Regards, Mitesh Agrawal
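For what it's worth, the comment embedded in that line of the settings file lists the accepted values, so an explicit setting (rather than relying on the default) would look like the line below. Whether the R80.10 build behaves differently with it set explicitly is an assumption to verify, not something the post confirms:

<!-- targetConfiguration.xml, log types to export: all | log | audit (per the inline comment above) -->
<log_types>all</log_types>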
MIchael_Hovis inside Logging and Reporting Wednesday
views 1073 6 1

Export Logs To LogRhythm using Log Exporter

Has anyone used Log Exporter to export logs to LogRhythm? I have a Check Point management server that is also the log server, running R80.20. I've configured Log Exporter and am sending logs to LogRhythm using the CEF format. However, LogRhythm says they cannot parse the logs. Has anyone else run into this problem and found a solution? Thanks.
David_Spencer inside Logging and Reporting Wednesday
views 217 10

High I/O queue size and wait

I'm trying to do some optimization on my 5400 units, as we are usually at over 80% CPU and memory utilization, even hitting swap memory. While doing some digging in cpview I found that my I/O average changes quickly between about 1,000 wait and 7,000,000,000,000 wait, which is absurdly large. There is also a very large queue size of around 80,000,000. What should I do with this information?
Michael_Clarke inside Logging and Reporting Wednesday
views 106 1

hung administrator session

In SmartDashboard I see two administrator sessions (SmartUpdate) that I cannot delete/disconnect. At the CLI on the MDS I see no TCP connection for them. Is there a way to force that disconnect, or is this some state information not cleared from a file/database, and if so, where should I look?

OPSEC/Lea Connection to QRadar

I just need a sanity check here. I have a customer with multiple VSs running on some 21ks. For reasons too lengthy to go into on this thread, they are moving all VSs to physical clusters. I moved the first VS to a 6800 cluster last weekend. The customer has QRadar set up against their CMA with an OPSEC/LEA connection. They are telling me they are not seeing logs from the new cluster, but still see all of the old logs as they would expect. All logs are visible in the log server, including those from the new hardware cluster. I am fairly certain on this, but this customer is making me doubt myself: if you have an OPSEC/LEA connection to a log server, there is no way to filter which logs are sent, right? Or which firewall's logs are sent? It has to be something on the QRadar side that is filtering, I would think. Am I mistaken here? Or is there something obvious that I'm missing? Thanks, Paul
Dan_Zada
inside Logging and Reporting Monday
views 10240 20 20
Employee+

*New* Splunk App for Check Point Logs

Hello all, I'm happy to announce a new Splunk app for Check Point logs. Check Point brings you an advanced and real-time threat analysis and reporting tool for Splunk. The Check Point App for Splunk allows you to respond to security risks immediately and gain true insights into your network. You can collect and analyze millions of logs from all Check Point technologies and platforms across networks, Cloud, Endpoints and Mobile.

Key features are:
- Infinity Dashboards: general overview, top attacks, detected and prevented events, events timeline, blades statistics
- Cyber Attack View: a unique ability to aggregate Check Point events per attack vector (across all blades): reconnaissance actions against the network, delivery methods (malicious emails, malicious file downloads, server exploits), infected hosts
- SandBlast Events: predefined aggregation for mail and web attack vectors
- CIM Support: Check Point logs are mapped into CIM (Common Information Model) and can be analyzed using standard dashboards (such as Splunk Enterprise Security). More information on CIM can be found here: https://docs.splunk.com/Documentation/CIM/4.12.0/User/Overview
- Fast Deploy: an easy and fast deployment using the new Log Exporter

The app can be downloaded from Splunkbase: Check Point App for Splunk | Splunkbase
User Guide: https://sc1.checkpoint.com/documents/App_for_Splunk/html_frameset.htm
SK about the Log Exporter: http://supportcontent.checkpoint.com/solutions?id=sk122323

For any question, comment or suggestion, please contact cp_splunk_app_support@checkpoint.com. Thank you! Dan Zada, Group Manager.
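The announcement leans on Log Exporter for the Fast Deploy but does not show the exporter side. Based purely on the cp_log_export syntax that appears elsewhere on this board, a deployment would look something like the lines below; the instance name, target address and protocol are placeholders, and the User Guide linked above is the authoritative reference:

cp_log_export add name splunk_app target-server <splunk-indexer-ip> target-port 514 protocol tcp format splunk read-mode semi-unified
cp_log_export restart name splunk_app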
Josh_Dillig inside Logging and Reporting Monday
views 214 1

Splunk Dashboard for Check Point logs

Has anyone built dashboards/apps for Check Point logs once exported into Splunk, besides the threat-specific one that is in the Splunk repository? It's nice for summarizing IPS-related stuff, but not for general usage. We are looking for something that would mimic the functionality of SmartConsole: search feature, formatting, top talkers, etc. Thanks, Josh
Dan_Zada
inside Logging and Reporting a week ago
views 3354 37 9
Employee+

Log Exporter Filtering

Hello all, I'm happy to inform you that we have added a new feature to the log exporter: the ability to filter logs. Starting today, you will be able to configure which logs will be exported, based on fields and values, including complex statements. More information, including basic and advanced filtering instructions, can be found in SK122323. If you have any question or comment, let me know. Thanks! Dan.
Vladimir inside Logging and Reporting a week ago
views 196

CPLogInvestigator issue

Hi there! Can anyone comment and suggest remediation for this? We are running a security checkup on a CP provisioned all-in-one Management and Gateway (15400). All of a sudden, log retention dropped to two days. Looking at CPLogInvestigator, I am seeing:

[Expert@gw-4332cc:0]# ./CPLogInvestigator
CBinaryFile::Open: exit status false
CMappedBinaryFile::error opening file /opt/CPsuite-R80.30/fw1/log/static_analysis.log
CLogFile::Open2: error: open (/opt/CPsuite-R80.30/fw1/log/static_analysis.log) for reading failed
Invalid log file: /opt/CPsuite-R80.30/fw1/log/static_analysis.log
Thank you for using log investigator tool.
==============================================================
Start reading log file: /opt/CPsuite-R80.30/fw1/log/fw.log
Start reading log file: /opt/CPsuite-R80.30/fw1/log/fw.log from log 0
..........................
Reading log file is DONE.
Start reading log file: /opt/CPsuite-R80.30/fw1/log/2020-01-09_090503_952.log
Start reading log file: /opt/CPsuite-R80.30/fw1/log/2020-01-09_090503_952.log from log 0
....................
Reading log file is DONE.
Total scanned 8619134 logs out of 12547661 logs in file
Scanned logs dates are from 09-01-2020 07:51:24 to 09-01-2020 09:29:30
Observed blades:
- Anti Malware
- Application Control
- IPS
- N/A
- New Anti Virus
- URL Filtering
- VPN-1 & FireWall-1
========================================
Summary - Estimations based on findings:
Log file size per day: 64.0486GB (126519398 logs)
Estimated events per day:
- Estimated events per day based on active blades: 1232090
Storage required per day:
- SmartEvent: 5.7374GB
- Log Server: 64.0486GB
- Log Server + SmartLog: 128.0973GB
Please refer to sk87263 to use these metrics and size your SmartEvent solution.
The SK can be found at Check Point's Support Center: https://supportcenter.checkpoint.com/supportcenter/index.jsp

But just a few minutes earlier, I'd been seeing numbers roughly half of those shown above:

[Expert@gw-4332cc:0]# ./CPLogInvestigator -a -p
CBinaryFile::Open: exit status false
CMappedBinaryFile::error opening file /opt/CPsuite-R80.30/fw1/log/static_analysis.log
CLogFile::Open2: error: open (/opt/CPsuite-R80.30/fw1/log/static_analysis.log) for reading failed
Invalid log file: /opt/CPsuite-R80.30/fw1/log/static_analysis.log
Thank you for using log investigator tool.
==============================================================
Start reading log file: /opt/CPsuite-R80.30/fw1/log/fw.log
Start reading log file: /opt/CPsuite-R80.30/fw1/log/fw.log from log 0
.....
Reading log file is DONE.
Total scanned 799953 logs out of 799952 logs in file
Scanned logs dates are from 09-01-2020 09:05:03 to 09-01-2020 09:07:51
========================================
Product log statistics (Per Day):
Days of counting: 0.00194444
Product name: Anti Malware          Amount of logs: 2        Average: 1028
Product name: Application Control   Amount of logs: 10173    Average: 5231828
Product name: Content Awareness     Amount of logs: 89       Average: 45771
Product name: Identity Awareness    Amount of logs: 59       Average: 30342
Product name: N/A                   Amount of logs: 239093   Average: 122962114
Product name: IPS                   Amount of logs: 90       Average: 46285
Product name: Threat Emulation      Amount of logs: 70       Average: 36000
Product name: URL Filtering         Amount of logs: 9479     Average: 4874914
Product name: VPN-1 & FireWall-1    Amount of logs: 540953   Average: 278204400
Total logs per day:
Date       | GB      | Count
2020-01-08 | 45.9551 | 314413696
2020-01-09 | 12.3826 | 88459492
fw.log     | 0.2127  | 1599904
==============================================================
[Expert@gw-4332cc:0]#

Any thoughts? By any measure, this seems to be an outrageous amount of logs for our environment.

Thank you,
Vladimir
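One thing that may explain the spread between the two runs: the "Average" figures in the -a -p output appear to be straight extrapolations of a very short sample. The scanned window is 09:05:03 to 09:07:51, i.e. 168 seconds, which matches the "Days of counting: 0.00194444" shown (168 / 86400). Scaling the sampled counts by 86400/168 reproduces the averages:

540953 logs in 168 s x 86400/168 = 278,204,400 per day   (matches the VPN-1 & FireWall-1 average exactly)
 10173 logs in 168 s x 86400/168 =   5,231,829 per day   (Application Control average, within rounding)

So a sample that happens to land on a busy couple of minutes will overstate the daily totals, and the longer full-file run (the 64 GB/day estimate) is likely the more representative of the two.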
Ants inside Logging and Reporting a week ago
views 593 8

FW logs shows in tracker but not in smartconsole logs

Hi All, Weird scenario at the moment. We have a management server (with log server) running R80.30 with 4 clusters sending logs to it, all working as expected. We added a new cluster (R80.10) recently, but for some weird reason I cannot see its logs in SmartConsole. I can confirm logs are being sent correctly to the SMS. If I open the console, go to 'Logs & Monitor', select 'New Tab' and open the Logs view, I see all the other FWs' logs, but no logs from the new cluster. Now here's the kicker:
- the new cluster's logs show up fine in the tracker, along with all the other FWs
- I can also see the new cluster's logs in SmartConsole if I go to Logs, select 'Options' > 'File', choose 'Open log file' and select fw.log; then I can see them.
It is just that when you open the default log tab, none of those logs show, even though it uses the fw.log file as well. So it's only when I manually open the fw.log file that I can see the logs, if that makes sense. Could this be a bug perhaps? Or maybe I need to reindex? Any ideas? Thanks in advance.
B_P inside Logging and Reporting 2 weeks ago
views 854 17

R80.30 Netflow Setup

Pre-R80.10, NetFlow worked fine. Now on R80.30 I have two flows that are identical -- but one only shows outbound and the other only shows inbound. BUT -- and this is perplexing -- it is the exact same traffic for both the inbound and outbound flows, i.e. the source and destination are the same. Yes, let that simmer for a while. I have one rule configured on the firewall and it's a rule that a lot of web traffic hits on. I'm using ManageEngine's NetFlow Analyzer. For this traffic, I would expect there to be one flow, and it should include both inbound and outbound traffic on the one interface (the internal interface it's hitting).