Logging and Reporting

Have questions about viewing logs with SmartView, generating reports with SmartEvent Event Management, or exporting logs to a SIEM with Log Exporter? This is where to ask!

rajesh_s inside Logging and Reporting 8 hours ago
views 24588 20 9

Check Point gateways are not sending logs to the Check Point management server

Hi All, We are using a Check Point R77.30 firewall. The gateways are not sending logs to the Check Point management server. Has anyone had a similar issue?
Nuno_Lourenco1 inside Logging and Reporting 10 hours ago
views 1400 6

SmartView - Views

Hi CheckMates, Is there any way in SmartView (SmartEvent R80.10) to configure a user to always open a specific view upon login (like a default view or a direct link to the view)? Thanks. Regards, Nuno
MattDunn inside Logging and Reporting 11 hours ago
views 41 1

QoS Graph

A customer has QoS enabled, with rules to guarantee minimum and limit maximum bandwidth based on source IP (subnet). Is there a way to produce reports or graphs showing what bandwidth is being used over a period of time? I.e., can I produce a graph showing bandwidth usage over a week for a particular subnet or QoS rule? So far I can't find any way of reporting on bandwidth used, to demonstrate to the customer that it is constantly hitting its rule limit and that they need to increase the allowance in the rule.
cdrik inside Logging and Reporting 22 hours ago
views 82 3

Enable tracking on all rules not working after upgrade to R80.20?

Hello, Our ClusterXL gateways are configured to send the logs of their tracked rules to our management servers, and we have also enabled sending logs for all rules to another dedicated log server (configured in the Reporting tool). Everything was working as expected in R77.30, but we upgraded one of our clusters to R80.20 and since then this cluster only logs rules with the track option set to 'Log', both on our management server and on our dedicated log server... Is it still possible to keep logs of all rules in R80.20 without being forced to set every rule to 'Log'? Regards, Cedric
entsupport inside Logging and Reporting Wednesday
views 110 1

Logs intermittently stop coming into the Management server

Hello All, We are frequently facing an issue with logging on our management server. At random times the management server stops receiving logs, and after some time it starts working again. We are not able to identify where the issue lies. We would appreciate your support so we can resolve this issue as soon as possible. Regards, ENT Support
Peter_Baumann inside Logging and Reporting Wednesday
views 244 3 1

Log Exporter stopped reading logs

Hello again, a new problem, this time with the Log Exporter:

[Expert@cplog01p:0]# date
Tue Jul 02 09:40:40 CEST 2019
[Expert@cplog01p:0]# cp_log_export status
name: fw.domain.com
status: Running (3986)
last log read at: 27 Jun 11:51:02
debug file: /opt/CPrt-R80.20/log_exporter/targets/fw.domain.com/log/log_indexer.elg

--> The Log Exporter stopped reading logs several days ago but is still running. We did a cp_log_export restart and it worked again.
Does someone know how to monitor for the Log Exporter having stopped working even when the process is still running? Is this a known problem?

Installed version of cplog01p:

[Expert@cplog01p:0]# cpinfo -y all
This is Check Point CPinfo Build 914000182 for GAIA
[IDA] No hotfixes..
[CPFC] HOTFIX_R80_20_JUMBO_HF_MAIN
[MGMT] HOTFIX_R80_20_JUMBO_HF_MAIN
[FW1] HOTFIX_R80_20_JUMBO_HF_MAIN
FW1 build number:
This is Check Point Security Management Server R80.20 - Build 007
This is Check Point's software version R80.20 - Build 047
[SecurePlatform] HOTFIX_GOGO_LT_HALO_JHF
[CPinfo] No hotfixes..
[DIAG] No hotfixes..
[Reporting Module] HOTFIX_R80_20_JUMBO_HF_MAIN
[CPuepm] HOTFIX_R80_20_JUMBO_HF_MAIN
[VSEC] HOTFIX_R80_20_JUMBO_HF_MAIN
[SmartLog] No hotfixes..
[MGMTAPI] No hotfixes..
[R7520CMP] No hotfixes..
[R7540CMP] No hotfixes..
[R76CMP] No hotfixes..
[SFWR77CMP] No hotfixes..
[R77CMP] HOTFIX_R80_20_JHF_COMP
[R75CMP] No hotfixes..
[NGXCMP] No hotfixes..
[EdgeCmp] No hotfixes..
[SFWCMP] No hotfixes..
[FLICMP] No hotfixes..
[SFWR75CMP] No hotfixes..
[CPUpdates] BUNDLE_R80_20_JUMBO_HF_MAIN_gogoKernel Take: 47
[rtm] No hotfixes..
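
One rough way to catch this condition is to poll the exporter status from cron and compare the "last log read at" timestamp against the current time, restarting (or alerting) when it goes stale. The sketch below is not an official Check Point tool: it assumes GNU date is available in Expert mode, that the status output matches the format shown above, and that the exporter name and the exact cp_log_export status syntax are adjusted to your build.

#!/bin/bash
# Sketch: flag a Log Exporter instance whose "last log read at" timestamp is
# older than MAX_AGE seconds, even though the process is still "Running".
EXPORTER="fw.domain.com"   # exporter name from the status output above; change as needed
MAX_AGE=3600               # tolerated silence, in seconds

LAST_READ=$(cp_log_export status name "$EXPORTER" | grep 'last log read at' | cut -d':' -f2-)
if [ -n "$LAST_READ" ]; then
    LAST_EPOCH=$(date -d "$LAST_READ" +%s 2>/dev/null)
fi
NOW=$(date +%s)

if [ -z "$LAST_EPOCH" ] || [ $((NOW - LAST_EPOCH)) -gt "$MAX_AGE" ]; then
    logger -t log_exporter_watch "Log Exporter '$EXPORTER' looks stale (last read:${LAST_READ:- unknown}); restarting"
    cp_log_export restart name "$EXPORTER"
fi

Running this every few minutes from cron at least surfaces the condition; raising an alert instead of auto-restarting may be the safer long-term choice.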
JS_FW inside Logging and Reporting Tuesday
views 194 2

Log Exporter monitoring, and position "catch up"

Hi there, We've recently migrated to Log Exporter from OPSEC LEA (R80.20 SMS), exporting to LogRhythm. For the most part things are working great. Unfortunately, our IR team is reporting gaps in our logs. We wanted to know if there is a way to monitor for when the service restarts, and if there is a way to "catch up" from the last transmitted log point. There was a position marker they were able to use when they were pulling from OPSEC LEA, but now that we are pushing, they can't use that. Any help or guidance is appreciated.
masher inside Logging and Reporting Tuesday
views 585 7 2

Log Exporter and Elasticsearch

There have been multiple posts requesting information about Check Point's options for integration with an ELK stack for logging. Many of the posts recommend using Log Exporter over LEA for exporting logs to that system, but I have not seen a post that covers configuration of the ELK stack in order to use this exported data. I put the details behind spoiler tags as this is a fairly long-winded post, since I cover some of the basics before getting into the configuration options.

Screenshots (spoiler): log data within the "Discover" section of Kibana, and a basic dashboard.

Documentation (spoiler):

Software Revisions
- Check Point R80.30 (JHF 76)
- Elasticsearch 7.4.0
- Logstash 7.4.0
- Logstash plugins: logstash-input-udp, logstash-filter-kv, logstash-filter-mutate, logstash-output-elasticsearch
- Kibana 7.4.0
- NGINX
- Arch Linux

Elasticsearch
What is Elasticsearch? From Elastic's website: "Elasticsearch is a distributed, RESTful search and analytics engine capable of solving a growing number of use cases. As the heart of the Elastic Stack, it centrally stores your data so you can discover the expected and uncover the unexpected."
Elasticsearch provides an API-driven indexing solution for querying and storing large quantities of data. It can be used as a standalone or clustered solution, and it has additional deployment options providing real-time search integration with other database and storage platforms. In Arch Linux, the Elasticsearch configuration files are stored under /etc/elasticsearch. This guide uses the default settings within the elasticsearch.yml file; the defaults are reasonable and the service listens on TCP port 9200 on localhost. The Java configuration is stored in the jvm.options file and is used to adjust the memory requirements of the Java processes running Elasticsearch.

Kibana
What is Kibana? From Elastic's website: "Kibana lets you visualize your Elasticsearch data and navigate the Elastic Stack…"
Kibana is a customizable web interface that interacts with Elasticsearch in order to build dashboards and visualizations, or to search the stored data. In Arch Linux, the configuration folder is /etc/kibana. The kibana.yml file has a useful set of defaults that I recommend validating:
- server.port: 5601 - the service port the server will listen on for inbound HTTP(S) connections.
- server.host: "localhost" - the server name that Kibana will use for this particular instance. Since this document only uses a single instance, the default name was used.
- server.basePath: "/kibana" - the URL path used when running behind a web service or reverse proxy. In this example, requests will be sent to http://host/kibana.
- server.rewriteBasePath: true - forces the Kibana process to rewrite outbound responses to include the basePath in the URL.
The default template documents the additional settings available for Kibana; they are also described at https://www.elastic.co/guide/en/kibana/master/settings.html.

Logstash
What is Logstash? From Elastic's website: "Logstash is an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite "stash." (Ours is Elasticsearch, naturally.)"
Logstash provides a method for receiving content, manipulating that content, and forwarding it on to a backend system for additional use. Logstash can manipulate these streams into acceptable formats for storage and indexing, or for additional processing of the content. In Arch Linux, the configuration folder is /etc/logstash. The Arch Linux package starts the logstash process and reads the configuration files under /etc/logstash/conf.d. This allows logstash to use multiple input sources, filters, and outputs. For example, logstash could be configured as a syslog service that outputs specific content to an Elasticsearch cluster on the network, while another configuration file listens on port 5044 for separate data to export into a different Elasticsearch cluster.

Logstash Plugins
Depending on the Linux distribution, several logstash plugins might not be available from the distribution's package repository. To add plugins, git is required to import them from the Logstash plugin repository. The logstash-plugins repositories are available online at https://github.com/logstash-plugins/ and their installation process is documented on that site.

NGINX
NGINX is a web and reverse proxy server; it also has other capabilities such as load balancing and web application firewall. The configuration for nginx is located in the /etc/nginx directory. Since NGINX can have a wide array of configurations, only the configuration used to allow access to Kibana is documented below. It accepts connections to /kibana and forwards them to the localhost:5601 service. For additional configuration options for NGINX, see https://docs.nginx.com.

Snippet from /etc/nginx/nginx.conf:

http {
    include mime.types;
    sendfile on;
    keepalive_timeout 65;

    ## Reference to the Kibana service listening on localhost:5601
    upstream kibana {
        server 127.0.0.1:5601;
        keepalive 15;
    }

    server {
        listen 80;
        server_name _;

        location / {
            root /usr/share/nginx/html;
            index index.html index.htm;
        }

        location /kibana {
            proxy_pass http://kibana;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
}

Check Point Log Exporter
Check Point Log Exporter documentation can be found in sk122323 on the Support Center. The configuration used in this document is very simple. The logstash configuration file expects the syslog messages to use the Splunk format, which delimits fields with the | character; this gives Logstash's kv filter an easy way to split the log fields.

[Expert@vMgmt01:0]# cp_log_export add name lab01 target-server 10.99.99.11 target-port 514 protocol udp format splunk read-mode semi-unified
Export settings for lab01 has been added successfully
To apply the changes run: cp_log_export restart name lab01
[Expert@vMgmt01:0]#

Logstash Configuration
There is a single configuration file defined for logstash in this example. Logstash will attempt to load all YML files in the configuration directory, and each file could have a different set of inputs, filters, and outputs. The location of the YAML file might differ depending on the distribution. In this example, there are three primary sections: input, filter, and output.

The input configuration tells the logstash process which plugins to run to receive content from external sources. Our example uses the UDP input plugin (logstash-input-udp) and is configured to act as a syslog service. Logstash has many other options for input types and content codecs; additional information can be found in the logstash input plugin documentation at https://www.elastic.co/guide/en/logstash/current/input-plugins.html.

input {
    udp {
        port => 514
        codec => plain
        type => syslog
    }
}

Logstash filters match, label, edit, and make decisions on content before it passes into Elasticsearch (ES). The filter section is where the patterns and labels are defined. In our example, logstash processes the syslog data through two separate filters. The kv filter is used to automatically assign labels to the received logs. Since the messages sent via Log Exporter are in a <key>=<value> format, the kv filter was chosen; it provides a simpler mechanism than using the grok or dissect plugins for assigning those values.

filter {
    if [type] == "syslog" {
        kv {
            allow_duplicate_values => false
            recursive => false
            field_split => "\|"
        }
        mutate {
            # Fields beginning with underscore are not supported by ES/Kibana, so rename them.
            rename => { "__nsons" => "nsons" }
            rename => { "__p_dport" => "p_dport" }
            rename => { "__pos" => "pos" }
            # Not necessary, just a field name preference.
            rename => { "originsicname" => "sicname" }
            # Example of removing specific fields
            # remove_field => [ "connection_luuid", "loguid" ]
            # String substitution
            # Strip the O\=.*$ and the ^CN= in the field.
            gsub => [
                "sicname", "CN\\=", "",
                "sicname", ",O\\=.*$", ""
            ]
        }
    }
}

The mutate plugin (logstash-filter-mutate) is used to manipulate the data as needed. In the configuration above, the originsicname field is renamed to sicname. Additional fields can be dropped using the remove_field configuration. The configuration can be validated using the "-t" parameter when launching logstash; configuration errors will be displayed along with the line numbers of the error.

/usr/share/logstash/bin/logstash -t -f /etc/logstash/conf.d/checkpoint.yml

Kibana - Basic Configuration
The default settings for Kibana are used in this document, so there are no additional configuration steps necessary. The NGINX configuration from the earlier section passes requests for http://hostname/kibana to the http://localhost:5601 service. After opening the management page, Kibana needs to be configured to use the data contained within Elasticsearch. Go to Management -> Index Patterns, select Create Index Pattern, and add a pattern that matches the logstash indices, then click Next. The next step will ask for the Time filter setting; select the @timestamp default and click "Create Index Pattern". Once this step has been completed, Kibana can present the available data from Elasticsearch.
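
The write-up above lists logstash-output-elasticsearch among the required plugins but never shows the output section of the pipeline. As a minimal sketch only (not part of the original post), assuming the single local Elasticsearch instance on 127.0.0.1:9200 used throughout the example and the default daily logstash index naming of Logstash 7.x, the output stanza could look like this:

output {
    if [type] == "syslog" {
        elasticsearch {
            # Single-node Elasticsearch from this example, listening on localhost:9200
            hosts => ["127.0.0.1:9200"]
            # No "index" option is set, so the default daily logstash-* indices are
            # created, which is what the Kibana index pattern step above matches.
        }
    }
}

After restarting logstash, a quick curl http://127.0.0.1:9200/_cat/indices should show the daily indices being created before moving on to the Kibana index pattern step.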

OPSEC LEA Connection to QRadar

I just need a sanity check here. I have a customer with multiple VSs running on some 21ks. For reasons too lengthy to go into on this thread, they are moving all VSs to physical clusters. I moved the first VS to a 6800 cluster last weekend. The customer has QRadar set up against the customer's CMA with an OPSEC LEA connection. They are telling me they are not seeing logs from the new cluster, but still see all of the old logs as they would expect. All logs are visible in the log server, including from the new hardware cluster. I am fairly certain on this, but this customer is making me doubt myself. If you have an OPSEC LEA connection to a log server, there is no way to filter which logs are sent, right? Or which firewall's logs are sent? It has to be something on the QRadar side that is filtering, I would think. Am I mistaken here? Or is there something obvious that I'm missing? Thanks, Paul
meirh inside Logging and Reporting Monday
views 172 4

Multiple SmartDashboard users

Hello, In our organization multiple users have read-only access to our management server in order to view logs via SmartLog. What is the impact on the management server when multiple concurrent users are always connected and searching for logs? CPU utilization? Memory? Our SMS is very slow and we are trying to identify why (the server is very strong, 16 cores and 64 GB of memory). The SMS version is R80.10. Thank you!
Kenneth_Greger1 inside Logging and Reporting a week ago
views 2611 7 2

Log Indexer crashing - SmartLog not working

Hi, we have been struggling since before Christmas with our R80.10 SmartCenter server (R80.10 - Build 439). Every now and then (after a few hours and/or days) SmartLog stops working, meaning that it is not possible to view the log files in the SmartDashboard GUI client (SmartView). We can see that the SmartCenter is receiving the logs, but the INDEXER service is crashing.
A workaround has been to run evstop, then look in $INDEXERDIR/log/log_indexer.elg and find the offending log file that the INDEXER process is not able to parse. Typically the file name shows up right before an entry that reads:

[log_indexer 30145 3804232592] [2 Jan 16:05:41] Start reading 127.0.0.1:2019-01-02_151203_1.log [1546423998] at position 5738761
[2 Jan 16:05:41] CBinaryLogFile::ReplaceFileToTableMemStringID: error - can't get mem string id
[2 Jan 16:05:41] CBinaryLogFile::ReplaceTableStringId error: couldn't get file string_id, will set to default NULL VALUE
[2 Jan 16:05:41] CBinaryLogFile::ReplaceFileToTableMemStringID: error - can't get mem string id
[2 Jan 16:05:41] CBinaryLogFile::ReplaceTableStringId error: couldn't get file string_id, will set to default NULL VALUE

Then we edit the file $INDEXERDIR/data/FetchedFiles, mark the offending file as finished, and the INDEXER moves on to the next log file. This procedure is described in sk116117. In some cases it does not indicate which file is problematic at all; what we do then is evstop;evstart, and (usually) after some time it will show the offending log file.
We have tried to re-install the SmartCenter, but the problem persists. Both our vendor and Check Point are involved in the case, but so far they have not come up with a solution. Any input is greatly appreciated. /Kenneth
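
For the cases where the offending file does get logged, one quick way to pull the candidate file names out of the indexer log is to grep for the "Start reading" lines that immediately precede the parse errors. A rough sketch, assuming $INDEXERDIR is set in the shell (as in the paths above) and that the error strings match the excerpt:

# Show the "Start reading <file>" entries that appear just before the parse
# errors, to spot which log file the indexer is choking on.
grep -B 2 "can't get mem string id" $INDEXERDIR/log/log_indexer.elg | grep "Start reading"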
Dilian_Chernev inside Logging and Reporting a week ago
views 195 3

Duplicated logs in SmartConsole with Management HA

Hello community 🙂 I have a setup with two R80.20 management servers, Primary and Secondary, and all gateways are logging to both servers simultaneously. In the SmartConsole logs there are two rows displayed for each connection, and the only difference is the "Log Server Origin" field: the first log is from the primary management and the second is from the secondary management. Is this normal behaviour? Can it be tuned somehow to show only logs from the primary management? Thanks
MiteshAgrawal15 inside Logging and Reporting a week ago
views 234 5

Audit events not received from Checkpoint R80.10 using Log Exporter Solution

Hi Team, We are facing issues in getting audit logs from Check Point R80.10. We have Micro Focus ArcSight in our environment. We are receiving the traffic logs without any issues and they are parsed properly, but the audit logs which we see in SmartConsole are parsed as "Log" and not much is captured in the raw event. At the same time, from other firewalls which are on R80.20 we are receiving "Log In", "Log Out", "Modify Rule" etc. events, and the username and other details are captured.

The log_types setting in the targetconfiguration.xml settings file is left at the default ("all"):
<log_types></log_types><!--all[default]|log|audit/-->

Also, the output of the cp_log_export show command on the management server is:
name: ArcSightLog
enabled: true
target-server: Agent Server IP
target-port: 514
protocol: udp
format: cef
read-mode: semi-unified

In one of the threads I saw that the domain-server argument needs to be provided while configuring the log export destination. Can someone please check the above config and tell if it was added or not? Please help in rectifying the issue: what configuration do we need to use in order to receive audit logs from R80.10 using the Log Exporter solution? TIA. Regards, Mitesh Agrawal
MIchael_Hovis inside Logging and Reporting a week ago
views 1099 6 1

Export Logs To LogRhythm using Log Exporter

Has anyone used Log Exporter to export logs to LogRhythm? I have a Check Point management server, which is also the log server, running R80.20. I've configured Log Exporter and am sending logs to LogRhythm using the CEF format. However, LogRhythm says they cannot parse the logs. Has anyone else run into this problem and found a solution? Thanks.
David_Spencer inside Logging and Reporting a week ago
views 320 10

High I/O queue size and wait

I'm trying to do some optimization on my 5400 units, as we are usually at over 80% CPU and memory utilization, even hitting swap memory. While doing some digging in cpview I found that my I/O average wait fluctuates quickly between about 1,000 and 7,000,000,000,000, which is absurdly large. There is also a very large queue size of ~80,000,000. What should I do with this information?