Employee+

Log Exporter guide

Hello All,

We have recently released the Log Exporter solution.
A few posts have already gone up and the full documentation can be found at sk122323.

However, I've received a few questions both on and offline and decided to create a sort of log exporter guide.

But before I begin I’d like to point out that I’m not a Check Point spokesperson, nor is this an official Check Point thread.

I was part of the Log Exporter team and am creating this post as a public service.

I’ll try to only focus on the current release, and please remember anything I might say regarding future releases is not binding or guaranteed.
Partly because I’m not the one who makes those decisions, and partly because priorities will shift based on customer feedback, resource limitations, and a dozen other factors. The current plans and roadmap are likely to change drastically over time.

And just for the fun of it, I’ll mostly use the question-answer format in this post (simply because I like it and it’s convenient).

 

Log Exporter – what is it?

Performance

Filters

Filters: Example 1

Filters: Example 2

Gosh darn it, I forgot something! (I'll edit and fill this in later)

Feature request

139 Replies
Contributor

Hi Yonatan,

Do you know if the "CMA IP" feature was already integrated into the new version of Nov 6th? I couldn't tell from the "what's new" for this new version.

Thanks,

Ivo

Collaborator

Hi Yonatan,

Would you mind telling me what format to use to send logs from R77.30.03 endpoint management server to R80.10 gateway management server?  It would be helpful if you could provide an example using this script.

cp_log_export add name <name> [domain-server <domain-server>] target-server <target-server> target-port <target-port> protocol <(udp|tcp)> format <(syslog)|(cef)> [optional arguments]

Thank you,

D. Roddy

Employee+

Hi Dan,

I'm sorry to say that we did not test log exporter with R77.30.03. 

 

I'm not familiar with this use case (sending logs from Check Point to Check Point), and to be honest, I personally have no experience with R77.30.03 (or endpoint in general).

I'll try and consult with some of my colleagues from endpoint but at this point I think that the answer is that it's not officially supported.

 

I think it's probably good advice to also request this through the regular channels (SE/RFE/solution center/etc.).

There is always a long laundry list of features to add and test; prioritizing between them is often influenced by customer feedback and official requests.

 

Regards,

 Yonatan 

Contributor

Hi Yonatan,

Just a quick question: am I right in saying the log exporter only installs on your management server, not on your gateways?

Employee+

Hi Conor,

The log exporter reads the logs from the local machine ($FWDIR/log by default) and exports them.

It relies on the indexing infrastructure to read the logs, which means the server has to have this infrastructure installed for it to be a valid target, which excludes gateways.

 

The generic answer would be – install the log exporter on the log server that stores the logs you wish to export.

Regards,

 Yonatan 

Explorer

Can I get medium to high IPS events in syslog?

Employee+

Hello John,

I actually addressed something very similar in the FAQ:

That’s great! But I don’t really need the Application Control logs either. I only want the IPS logs to be sent. How can I do that?

Unfortunately, you can’t do that in this release.

You can't filter by specific blade or by specific action/severity/protection etc.

You can only filter by the field itself, not by the specific content of the field.

So while you can't set up a filter saying 'blade=IPS' you can set up a filter that says 'protection_name is required' which will mean that only logs with the 'protection_name' field will be sent.

But again, you will not be able to set up a filter that says 'severity=high'.

Those types of filters are on the roadmap but are not yet supported.
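To make the presence-only semantics concrete, here is a tiny illustrative model in Python. The field names (protection_name, etc.) are real log fields, but the filter function itself is my own sketch of the behavior, not the Log Exporter's actual code:

```python
# Presence-only filtering: a log passes if the required field exists at all,
# regardless of its value. This models the 'protection_name is required'
# filter described above; it is an illustration, not the product's code.

def passes_presence_filter(log, required_fields):
    """Forward a log only if every required field exists in it."""
    return all(field in log for field in required_fields)

# An IPS-style log carries protection_name; a plain firewall log does not.
ips_log = {"protection_name": "Some.Protection", "severity": "4", "src": "10.0.0.1"}
fw_log = {"action": "Accept", "src": "10.0.0.2"}

print(passes_presence_filter(ips_log, ["protection_name"]))  # True
print(passes_presence_filter(fw_log, ["protection_name"]))   # False
```

Note that the filter never inspects the field's value, which is exactly why 'severity=high' cannot be expressed this way.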

Yonatan 


Hi,

Is this tool able to export the logs in Check Point log format to an external log server, to view in SmartView Tracker/SmartLog, without using a syslog-to-Check Point log parser?

Thanks.

Regards

Wilson

Employee+

Hello Swee,

Let me start by saying we did not test this use case. Everything I'm going to say is my opinion, but I could be mistaken as I haven't actually tried this.

The tool can export logs in the Check Point format ('as is'). The difficulty isn't on the exporting side but on the receiving side.

The receiving end will not be able to recognize that these logs are arriving from a 'legal' checkpoint server (with a SIC connection). The only way for it to accept those logs would be as a syslog. And while this is possible (there is a checkbox on the log server that enables it to receive syslogs), the logs will be treated as syslog.

That means that the log server will generate a syslog log and the payload will be put into the message field.

So while the data will be sent, it will not be parsed as a Check Point log but rather as raw data.

Yonatan 


Hi Yonatan,

Thanks for your reply.

If this is the case, does Check Point have any syslog-to-Check Point log parser as per sk55020?

My external log server is not SIC with the management server that is why we are looking at sk55020.

I tried to use the log parser editor, but there are too many log fields in the syslog compared to other syslogs, and they vary between versions, e.g. R77.30 and R80.10.

Thanks.

Regards

Wilson

Employee+

Hi Wilson,

Again most of what I'm about to say is my own personal opinion.

I see a lot of issues with allowing a log server to accept logs from a 3rd party source without SIC or another type of authentication. It opens up your environment for log injection from unsanctioned sources.

This is why we added the TLS option to the log exporter. This was a feature that we had to develop for the log exporter specifically, which means we are able to send logs using this mechanism. But for incoming logs, the mechanism in use is different.

This means that something will likely have to be created to either add SIC support to the log exporter somehow or add incoming TLS support to the log server itself, or something else along those lines.

There might be other options that I'm not considering or a way to easily manipulate the configuration to use an existing option, but it seems to me that it won't work (securely) out of the box and will need some development effort to make it work.

I'll raise this use case as something that we should look into as a roadmap item. I would also suggest that you try to ask for this through official channels  (SE/Solution Center/etc.). First off they might have suggestions that I haven't considered or options that I'm unaware of, and secondly, if enough such requests are made by customers it can have a major impact on prioritization of feature development.

Regards,

 Yonatan 

Contributor

Hi guys, I'm using ArcSight and I'm having trouble setting this up.

I think it's on the connector side; the log exporter is reporting:

 

[log_indexer 23554 4107553680]@SmartEvent-vm[6 Jun 17:17:15] SyslogTCPSender::connect: Failed to initialize socket (x.x.x.x:514)

[log_indexer 23554 4107553680]@SmartEvent-vm[6 Jun 17:17:15] TcpTlsSender::connect: Failed to create socket.

 

Should the added lines in agent.properties be:

agents[0].syslogng.tls.keystore.file=user/agent/syslog-ng.p12
agents[0].syslogng.tls.keystore.alias=syslogng-alias

or

 

syslogng.tls.keystore.file=user/agent/syslog-ng.p12
syslogng.tls.keystore.alias=syslogng-alias

I've tried both ways, and no go.

Also, where should I put the syslog-ng.p12 file?

Employee+

Hello Rodrigo,

This question should probably be directed to an ArcSight expert (which I am definitely not).

I believe that the syslog-ng.p12 file should be located in your <connector folder>/current/user/agent/ folder (but I think you can use other locations if you specify it in the setting).

I think that you should use the commands as the sk instructs:

syslogng.tls.keystore.file=user/agent/syslog-ng.p12

syslogng.tls.keystore.alias=syslogng-alias

If it sounds as if I'm unsure of my answers, that's because I'm unsure of my answers 🙂

I would definitely consult with an ArcSight expert on these points.

I would also suggest that this might be better handled via a support ticket. Debugging TLS connections is extremely difficult (at least for me...) even under ideal circumstances, and a forum post is a less than ideal medium for debugging.

HTH 

 Yonatan 

Participant

Hi, just to confirm I am reading this correctly. I have a VSX with 5 firewalls and need to send logs from just 3 to a QRadar SIEM. In cplogtosyslog I could filter on the CN=,O= to send these logs.

If I read this thread correctly I cannot do that with Logexporter?

Employee+

Hi Petrus,

This is correct.

We have some filtering options (as described in this post), but advanced filters are still a major gap and a major item on our roadmap.

Filtering to only one GW is, unfortunately, an example of the gap.

Participant

Hello,

We upgraded one of our lower lifecycle environments to R80.20 EA about a month ago, and I'm trying to configure the log exporter tool to send data to our staging Splunk environment (so that we can replace OPSEC LEA).  The management server is MDS with three domains, and I've successfully added the exporter (syslog) to one of the domains.  I've deleted and recreated the exporter using a variety of settings, but I'm seeing the same behavior each time.

Our Splunk administrator reports that data is successfully making it to his end (i.e. a file is created in the directory where all of the syslog data lands), but when you open up the file it just shows the timestamp, our MDS server name, the word CheckPoint, and the current process ID of the log exporter daemon (see below). It looks like it's going all the way back to the start of the previous day's log, but it's not populating it with the actual information.

Jul 10 02:48:31 <MDS_SERVER_NAME> CheckPoint[13032]

Jul 10 02:48:31 <MDS_SERVER_NAME> CheckPoint[13032]

Jul 10 02:48:31 <MDS_SERVER_NAME> CheckPoint[13032]

Jul 10 02:48:31 <MDS_SERVER_NAME> CheckPoint[13032]

Jul 10 02:48:31 <MDS_SERVER_NAME> CheckPoint[13032]

Jul 10 02:48:31 <MDS_SERVER_NAME> CheckPoint[13032]

Jul 10 02:48:31 <MDS_SERVER_NAME> CheckPoint[13032]

Am I missing something easy here?

Thanks,

Ryan

Employee+

Hi Ryan,

This sounds like something that should definitely be investigated.

I think that this can probably best be addressed via a remote session.

I think the best approach here is to open a ticket with TAC and have one of their engineers look at the settings.

There is something fishy in this output and this format: 

Jul 10 02:48:31 <MDS_SERVER_NAME> CheckPoint[13032]

This doesn't look like any of the preconfigured formats we use. I suspect that this specific header is not actually generated by Check Point but probably by some other server somewhere in the path.

But figuring this out from a forum post is unlikely. 

As I stated before - the best approach is probably via a ticket with TAC.

HTH 

 Yonatan 

Explorer

Hello,

I have recently deployed this tool onto a few of our R80.10 log servers and have it configured to send logs to an ArcSight collector via CEF.  The ArcSight admin has mentioned the headers seem to be a bit different compared to what they are seeing from our R77.30 gateway.

From our R80.10 log 

CEF:0|Check Point|VPN-1 & FireWall-1|Check Point|Log|Log|Unknown| eventId=458052771 proto=TCP catdt=Firewall art=1531238932768

And from our R77.30 gateway

CEF:0|Check Point|VPN-1 & FireWall-1||accept|accept|Low| eventId=55639967897

Specifically, they were asking why these logs were showing up with accepts/denies previously, whereas now it looks like we are just sending "log"/"log" over to them.

Is this something that is configurable on the Check Point end?

Thanks.

Employee+

Hi Bryan,

It feels as if we haven't spoken in a very long time 🙂

The CEF header is actually identical in both R77.30 and R80.10. It uses the exact same code.

The difference is likely from the logs themselves which have changed over time (especially when you compare R77.30 to R80.10).

You can actually see how the CEF header is built in the CefFormatDefinition.xml file.

CEF:Version|Device Vendor|Device Product|Device Version|Signature ID|Name|Severity|Extension

You're asking about the 5th and 6th position - 'Signature ID' and 'Name'.

The Signature ID is defined as (I edited this to make it more readable):

<default_value>Log</default_value>    // If nothing else fits use this value
<assign_order>first</assign_order>     // use the first value you find from the following list of possible values
attack , protection_type , verdict, match_table, protection_type, verdict, matched_category, dlp_data_type_name,  primary_application, app_category, app_properties

The Name is defined as:

<default_value>Log</default_value>   // If nothing else fits use this value
<assign_order>first</assign_order>    // use the first value you find from the following list of possible values
protection_name, primary_application, appi_name, message_info, protection_name, service_id

So there are multiple possible values depending on the specific log you use, with a default value of 'Log' in case you don't find any other valid value.

Not sure how the second 'accept' got there as it doesn't seem to fit any of the categories, but you'll have to look at the specific example.

In any case, once you understand the logic it's easy to reverse engineer the header and see how it was created. It follows very basic assignment rules.
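The 'first' assignment order can be sketched as a few lines of Python. This is my own illustrative model of the rules above, not the actual implementation; the candidate lists follow the CefFormatDefinition.xml excerpt (with the repeated entries dropped, since they don't change a first-match result):

```python
# First-match assignment with a default: walk the candidate field list in
# order, return the first field present in the log, else fall back to 'Log'.
# Candidate lists are copied from the excerpt above; the function is a sketch.

SIGNATURE_ID_FIELDS = [
    "attack", "protection_type", "verdict", "match_table",
    "matched_category", "dlp_data_type_name",
    "primary_application", "app_category", "app_properties",
]
NAME_FIELDS = [
    "protection_name", "primary_application", "appi_name",
    "message_info", "service_id",
]

def assign_first(log, candidates, default="Log"):
    """Return the value of the first candidate field found in the log."""
    for field in candidates:
        if field in log:
            return log[field]
    return default

ips_log = {"attack": "Signature Match", "protection_name": "Some CVE Protection"}
plain_log = {"src": "10.0.0.1"}

print(assign_first(ips_log, SIGNATURE_ID_FIELDS))  # Signature Match
print(assign_first(plain_log, NAME_FIELDS))        # Log
```

Run a specific log's fields through this and you can predict the 5th and 6th header positions it will produce.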

Hope this helped clarify some of the issues.

Yonatan 

Explorer

Thank you so much again for the chat regarding vSec, Yonatan 🙂

I must be blind, because I do not see a CefFormatDefinition.xml file in the $EXPORTERDIR or $EXPORTERDIR/targets directory.

Employee+

Hi Bryan,

You are correct. I forgot to specify that all FormatDefinition files (Cef, syslog, etc.) are under the conf folder.

$EXPORTERDIR/targets/<name>/conf/CefFormatDefinition.xml

HTH

 Yonatan 

Champion

Yonatan, if I were to install a separate Log server with SIC to management, would I be able to run Log Exporter from there? We have a number of customers that run a Log server in their own environment that we just set up as an additional Log server, and now the customer wants to send his Log data to 2 different destinations. As they log about 10GB a day we don't want this data to be sent twice from our management server with 50 customers on it.

So the main question here is: will we be able to run 2 log exporter sessions, one in syslog format to an IBM system and a CEF exporter to an ArcSight SIEM?

Regards, Maarten
Employee+

Hi Maarten,

Yes, you can run the Log Exporter from any Check Point server installed as a log server.

You can also have multiple deployments on the same server using different configurations.

By IBM, do you mean QRadar? If so, I believe QRadar can also work with CEF.

I would also point you to my comment about LEEF (the native QRadar format):

What’s the deal with LEEF – is it supported or not?

Yes… with some caveats.

We are not yet fully LEEF compliant in that the timestamp is sent in epoch (which is not supported by LEEF). We do however have an ongoing collaboration with IBM and they plan to update LEEF to support epoch format as well.
Once they do that we will be LEEF compliant.
Unfortunately, I don’t have access to any of their timetables and don’t know when they are actually going to do this.

And while it's been several months since I posted this, the status of LEEF hasn't changed. It's something we hope to fully support in the future, but we are dependent on IBM to first implement some changes on their end.

HTH 

 Yonatan 

Explorer

Hello Yonatan,

I'm testing R80.10 & the log export utility, specifically looking at IPS and AV logs. I've just noticed that AV logs include the source port but IPS logs do not (the source port for the related event is included in the traffic info, as a new line).

results are the same for CEF & Syslog

Part of an anti-virus log (all fields, source port included):

......;s_port=35679;service=80;malware_action=Malicious file/exploit download;protection_name=Malicious Binary.TC.xxxxxx;protection_id=07597e9Dd;protection_type=protection;severity=1........

Source port included in the first event of the IPS log (prevent log):

.....time=1533316810;src=xxx;dst=xxx;s_port=35681;service=80;action=Prevent;flags=411904;......

No source port in the second event (part of the IPS event):

....;protection_name=Oracle WebLogic WLS Security Component Remote Code Execution (CVE-2017-10271);protection_id=asm_dynamic_prop_CVE_2017_10271;... there are more fields but no source port

Is this expected behaviour or am I missing something (log unification?)

I also have an R77.30 setup, with the R77.30 management add-on installed and syslog configured through the dashboard; there it's possible to get the whole information as one line, including source port and attack info.
Employee+

Hi Onur,

You probably hit the nail directly on the head with your comment about log unification.

The thing to remember about the Log Exporter is that it's mostly an infrastructure service feature. We take the data (logs), manipulate how it looks (without adding or subtracting data from the payload*) and forward it on.

Each blade owner (IPS, AV, APPI etc.) decides on his own how the logs are generated and updated over time.

(* - We do add headers, and if you use filters we also remove data. We also allow the use of callback functions that manipulate data. )

As for the changes between R77.30 to R80+ those are again, likely related to the changes in the logs themselves.

The logs do change over time, from either new features developed or from the desire to improve the logs themselves - their readability, and overall usefulness. 

We actually have several ongoing projects to improve user experience with logs that will change things about their look and feel and sometimes even content. Most of those projects will mature and be published in future versions, and some will wither and die if we decide they don't actually improve the current state. We are always striving to improve the user experience wherever and whenever we can, and that means that logs change over time.

I can add that one of the features we are currently developing for the Log Exporter is a new optional mode called semi-unified, which will combine some features of raw mode and unified mode.

This mode actually already existed in the LEA OPSEC feature, and we are now integrating it into the Log Exporter.

Update logs will still be sent as they arrive, but will now be sent as a unified log. This will slightly increase the bandwidth (I say slightly because updates, in general, are a very small percentage of the overall number of logs) but should make the update logs more readable.

Let me give you an example of a log + update logs in raw mode vs semi-unified mode. This is probably not the most interesting example, but it's one I have on hand and makes the differences easy to understand. It's an Application Control update of an ongoing session, updating the browse time and the number of bytes (I obfuscated sensitive data, and removed some of the fields that don't really have an impact on this example):

Raw mode:

Event:
time=1504750545|hostname=hugo1-take-421|loguid={0x59b0abd1,0x19d,0x101a8c0,0xc0000000}|product=Application Control|action=Allow|origin=X.X.X.X|app_category=Computers / Internet|app_desc=Google offers a variety of tools and online services and encourages developers to use their tools' APIs. A key element in these products is data communication with Google's servers, which may be generated without an active request by the user. Supported from: R75.|app_id=60340676|app_properties=Computers / Internet, SSL Protocol, Low Risk, Search Engines / Portals|app_risk=2|app_rule_id={6999AABA-B5F8-4EA6-8959-E355723635B2}|app_sig_id=60340676:15|appi_name=Google Services|dst=X.X.X.X|matched_category=Computers / Internet|proto=17|proxy_src_ip=X.X.X.X|s_port=51580|service=443|src=X.X.X.X|

Update_1:

time=1504750556|hostname=hugo1-take-421|loguid={0x59b0abd1,0x19d,0x101a8c0,0xc0000000}|product=Application Control|action=Allow|origin=X.X.X.X|app_id=60340676|browse_time=1|bytes=52|dst=X.X.X.X|proto=17|received_bytes=0|s_port=51580|sent_bytes=52|service=443|src=X.X.X.X|suppressed_logs=4|

Update_2:

time=1504751146|hostname=hugo1-take-421|loguid={0x59b0abd1,0x19d,0x101a8c0,0xc0000000}|product=Application Control|action=Allow|origin=X.X.X.X|app_id=60340676|browse_time=10|bytes=1642|dst=X.X.X.X|proto=17|received_bytes=0|s_port=51580|sent_bytes=1642|service=443|src=X.X.X.X|suppressed_logs=4|

Semi unified:

Event:

time=1504750545|hostname=hugo1-take-421|loguid={0x59b0abd1,0x19d,0x101a8c0,0xc0000000}|product=Application Control|action=Allow|origin=X.X.X.X|app_category=Computers / Internet|app_desc=Google offers a variety of tools and online services and encourages developers to use their tools' APIs. A key element in these products is data communication with Google's servers, which may be generated without an active request by the user. Supported from: R75.|app_id=60340676|app_properties=Computers / Internet, SSL Protocol, Low Risk, Search Engines / Portals|app_risk=2|app_rule_id={6999AABA-B5F8-4EA6-8959-E355723635B2}|app_sig_id=60340676:15|appi_name=Google Services|dst=X.X.X.X|matched_category=Computers / Internet|proto=17|proxy_src_ip=X.X.X.X|s_port=51580|service=443|src=X.X.X.X|

Update_1:

time=1504750545|hostname=hugo1-take-421|loguid={0x59b0abd1,0x19d,0x101a8c0,0xc0000000}|product=Application Control|action=Allow|origin=X.X.X.X|app_category=Computers / Internet|app_desc=Google offers a variety of tools and online services and encourages developers to use their tools' APIs. A key element in these products is data communication with Google's servers, which may be generated without an active request by the user. Supported from: R75.|app_id=60340676|app_properties=Computers / Internet, SSL Protocol, Low Risk, Search Engines / Portals|app_risk=2|app_rule_id={6999AABA-B5F8-4EA6-8959-E355723635B2}|app_sig_id=60340676:15|appi_name=Google Services|browse_time=1|bytes=52|dst=X.X.X.X|lastupdatetime=1504750556|matched_category=Computers / Internet|proto=17|proxy_src_ip=X.X.X.X|received_bytes=0|s_port=51580|sent_bytes=52|service=443|src=X.X.X.X|suppressed_logs=4|

Update_2:

time=1504750545|hostname=hugo1-take-421|loguid={0x59b0abd1,0x19d,0x101a8c0,0xc0000000}|product=Application Control|action=Allow|origin=X.X.X.X|app_category=Computers / Internet|app_desc=Google offers a variety of tools and online services and encourages developers to use their tools' APIs. A key element in these products is data communication with Google's servers, which may be generated without an active request by the user. Supported from: R75.|app_id=60340676|app_properties=Computers / Internet, SSL Protocol, Low Risk, Search Engines / Portals|app_risk=2|app_rule_id={6999AABA-B5F8-4EA6-8959-E355723635B2}|app_sig_id=60340676:15|appi_name=Google Services|browse_time=10|bytes=1694|dst=X.X.X.X|lastupdatetime=1504751146|matched_category=Computers / Internet|proto=17|proxy_src_ip=X.X.X.X|received_bytes=0|s_port=51580|sent_bytes=1694|service=443|src=X.X.X.X|suppressed_logs=8|

So let's try to analyze what we're seeing here.

First off, some fields that were removed from the updates have been restored. Examples of such fields in this example are 'Application Name', 'Application Category', 'Application Description' etc. - for IPS logs the s_port you talked about will probably be here as well. In the original update logs, you would have had a hard time understanding which application the update was relevant to. You would have had to use the loguid to find the original log and make the connection.

Some fields with new information had the information replaced - the browse time from 0 to 1 to 10.

Other fields had their information updated - in the original updates the bytes went from 0 to 52 to 1642, while in the new mode they went from 0 to 52 to 1694.

The original updates just show the bytes sent during the updated slice, while the semi-unified mode keeps an accurate count of the overall current bytes.

Edit: Some fields have their values preserved - in the original mode each update has its own time, but in the new mode each update still shows the time when the event occurred (the original timestamp).

Each field has its own logic of how the update is performed based on its content.
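The per-field merge logic above can be modeled roughly like this. This is my own sketch of the behavior described in the example, under my own assumptions, not the Log Exporter's real code: counters accumulate, most fields are replaced, the original 'time' is preserved, and the update's timestamp lands in 'lastupdatetime':

```python
# A rough model of semi-unified merging: overlay an update log on the
# original event. Counter fields accumulate, 'time' keeps the original
# event timestamp (the update's time becomes 'lastupdatetime'), and all
# other fields (e.g. browse_time) are simply replaced. Illustrative only.

ACCUMULATE = {"bytes", "sent_bytes", "received_bytes", "suppressed_logs"}

def semi_unify(event, update):
    merged = dict(event)
    for field, value in update.items():
        if field == "time":
            merged["lastupdatetime"] = value      # preserve the original time
        elif field in ACCUMULATE:
            merged[field] = merged.get(field, 0) + value
        else:
            merged[field] = value                 # replace, e.g. browse_time
    return merged

event = {"time": 1504750545, "appi_name": "Google Services", "bytes": 0}
update_1 = {"time": 1504750556, "browse_time": 1, "bytes": 52, "suppressed_logs": 4}
merged = semi_unify(event, update_1)
print(merged["time"], merged["lastupdatetime"], merged["bytes"])  # 1504750545 1504750556 52
```

Feeding the second update through the same function reproduces the 1694-byte running total shown in the semi-unified example above (52 + 1642).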

HTH

 Yonatan

Explorer

Thank you for the detailed information, Yonatan. Is there any ETA for this release?

Employee+

Hi Onur,

Unfortunately, I can't comment on that.

While I'm a Check Point employee and I worked on the Log Exporter project and have been trying to give the 'inside scoop' on the project, I am by no means an official Check Point spokesperson.

As I've tried to make very clear in several places in this thread, everything I say is based on my own personal knowledge and opinion, and I've tried very hard to skirt subjects about the future of the product, or to emphasize that I was just expressing my personal opinion.

This thread was my own idea and was created as a public service, and not as part of my position at Check Point.

So just in case this wasn't clear before - everything I say here is just my own personal opinion. I work in the product organization (R&D and QA) and have zero impact on (and often very little knowledge of) the future of the product in terms of features and dates (unless it's stuff that I'm already actively working on).

(my suggestions and opinion are taken into account, but I don't decide what ends up on the list or its position on the prioritization list).

I sometimes have to walk a fine line between what I know and what I can say, and talking about dates and ETAs is taking a journey into dangerous territory for me 🙂

HTH

 Yonatan

Explorer

Thanks Yonatan,

regards

Participant

Hello Yonatan,

I executed cp_log_export, but the syslog server does not receive any logs.

Checking log_indexer.elg:

[log_indexer 13179 53189520]@C4600[14 Aug 22:02:15] Files read rate [log] : Current=7 Avg=139 MinAvg=10 Total=330956 buffers (0/0/0/0)

[log_indexer 13179 53189520]@C4600[14 Aug 22:02:15] Sent current: 0 average: 0 total: 0

[log_indexer 13179 53189520]@C4600[14 Aug 22:02:20] Files read rate [log] : Current=13 Avg=139 MinAvg=10 Total=331020 buffers (0/0/0/0)

[log_indexer 13179 53189520]@C4600[14 Aug 22:02:20] Sent current: 0 average: 0 total: 0

[Expert@C4600:0]# cp_log_export status

name: tolog
status: Running (13179)
last log read at: 14 Aug 21:48:29
debug file: /opt/CPsuite-R77/fw1/log_exporter/targets/tolog/log/log_indexer.elg

[Expert@C4600:0]# cp_log_export show

name: tolog
enabled: true
target-server: 192.168.x.x
target-port: 514
protocol: udp
format: syslog

My gateway and management are on the same device. Version: R77.30 upgraded with Check_Point_R77_30_JUMBO_HF_1_Bundle_T302_FULL.

Employee+

Hello Su,

From your log (Current=13 Avg=139 MinAvg=10 Total=331020 ) as well as the status command it appears that logs are being exported.

If you want to actually see this you can use tcpdump command: 'tcpdump port 514 -A -s0' (if you are using port 514 for anything else, you can add other qualifiers to narrow down the output).

This will show you the actual data being exported in a readable format. For example:

[Expert@ypsa:0]# tcpdump port 514 -A -s0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
14:25:25.094828 IP ypsa.47206 > XX.XX.XX.XX.syslog: SYSLOG local0.info, length: 1044
E..0..@.@..xd P...f......<134>1 2018-08-14T14:25:23Z ypsa CheckPoint 17857 - [action:"Accept"; ifdir:"inbound"; ifname:"eth0";  [deleted the payload] product:"VPN-1 & FireWall-1"; proto:"6"; s_port:"35700"; service:"22"; service_id:"ssh"; src:"XX.XX.XX.XX"; ]

1 packets captured
2 packets received by filter
0 packets dropped by kernel
[Expert@ypsa:0]#

(I deleted most of the payload since it just takes up space and isn't really relevant for this example - I just wanted to show that you can see and read the actual logs as they are being exported.)

Since it looks like your logs are actually being exported, I would focus on the other end and try to see if it's being received and parsed correctly.

Use tcpdump or Wireshark on the other end. If it's not there, it's a connectivity issue, and if it's there it's probably a parsing issue.
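If you want a quick sanity check on the receiving side, you can pull the key:"value" pairs out of a captured line with a few lines of Python. The regex and function here are my own sketch for eyeballing the data, not part of the product:

```python
# Extract the key:"value"; pairs visible in the exported syslog payload
# (as in the tcpdump output above). Illustrative helper, not product code.

import re

def parse_cp_syslog_fields(payload):
    """Return a dict of key:"value" pairs found in an exported syslog line."""
    return dict(re.findall(r'(\w+):"([^"]*)"', payload))

line = 'action:"Accept"; ifdir:"inbound"; s_port:"35700"; service_id:"ssh"'
fields = parse_cp_syslog_fields(line)
print(fields["action"], fields["service_id"])  # Accept ssh
```

If the fields parse cleanly from a capture taken on the receiving machine, the problem is almost certainly in the receiver's own parsing configuration rather than in transit.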

HTH 

 Yonatan 
