
Log Exporter guide

Hello All,

We have recently released the Log Exporter solution.
A few posts have already gone up and the full documentation can be found at sk122323.

However, I've received a few questions both on and offline and decided to create a sort of log exporter guide.

But before I begin I’d like to point out that I’m not a Check Point spokesperson, nor is this an official Check Point thread.

I was part of the Log Exporter team and am creating this post as a public service.

I’ll try to focus only on the current release, and please remember that anything I might say regarding future releases is not binding or guaranteed.
Partly because I’m not the one who makes those decisions, and partly because priorities will shift based on customer feedback, resource limitations and a dozen other factors. The current plans and roadmap are likely to change drastically over time.

And just for the fun of it, I’ll mostly use the question-answer format in this post (simply because I like it and it’s convenient).


Log Exporter – what is it?



Filters: Example 1

Filters: Example 2

Gosh darn it, I forgot something! (I'll edit and fill this in later)

Feature request


Re: Log Exporter guide

Log Exporter – what is it?

   So... what is it?

Check Point "Log Exporter" is an easy and secure method for exporting Check Point logs over a few standard protocols and in several standard formats.


Which protocols do you support?

UDP and TCP.

We also support secure connections (TLS) using mutual authentication.


What formats do you support?

At this time we support CEF, LEEF, Syslog and also a “generic” format.


Why isn’t field X covered in your CEF mapping?

We developed our CEF mapping (choosing which Check Point field is mapped to which CEF variable) in collaboration with Micro Focus (the owners of ArcSight), and it represents what we believe to be the best overall coverage of the major fields from across our blades. However, since there is only a set number of available CEF variables, we had to pick and choose which fields we wished to map.
If you feel a specific field should be given higher priority, or if you don’t use some blades and would rather remap those variables to other fields, you can simply create a user-defined mapping file that reflects your own preferences.


What’s the deal with LEEF – is it supported or not?

Yes… with some caveats.

We are not yet fully LEEF compliant, in that the timestamp is sent in epoch format (which is not supported by LEEF). We do, however, have an ongoing collaboration with IBM, and they plan to update LEEF to support the epoch format as well.
Once they do that we will be LEEF compliant.
Unfortunately, I don’t have access to any of their timetables and don’t know when they are actually going to do this.


I use Splunk, and I didn’t notice CIM in the listed formats – what gives?

We plan to release a Splunk application which will support CIM in a future Log Exporter release.
In the meantime, the Generic format will give good field extraction results.


I can’t find the policy name field. Where is it?

The policy name field doesn’t actually exist as a unique field. It is part of the __policy_id_tag which is not really all that readable. In this release, we are filtering out this field by default, and plan to address this in a future release.

(If you really need this field you can remove the filter from the mapping file; bear in mind you’ll have to somehow parse the field to extract the relevant information.)
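If you do re-enable it, the change is presumably a matter of flipping the field's export flag in the mapping file. A sketch (the <exported> element is mentioned elsewhere in this thread; the surrounding element names are assumptions modeled on the bundled fieldsMapping.xml):

```xml
<!-- Sketch: re-enable export of the raw policy tag field.
     Element names other than <exported> are assumptions;
     check fieldsMapping.xml for the exact syntax. -->
<field>
  <origName>__policy_id_tag</origName>
  <exported>true</exported>
</field>
```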


What about the domain name field?
Same answer as above.


In my SmartLog each log has a type such as log, alert, control etc.
Where can I find this information?

This information is contained in the flags fields. Unfortunately, it’s not human readable (just a bunch of bits). For now, we are filtering this field out by default and plan to address this in a future release.

Re: Log Exporter guide


What is the CPU usage of the Log Exporter?
What is the memory consumption?

How many logs/second can I export?

Yeah, those are really good questions.

Unfortunately, I don’t have any official answers.

I can give some anecdotal answers and specific examples or comments I’ve heard from customers.


If you tested this in your environment I encourage you to add your results/comments below.


On a Smart-1 405 running R80.10 with CEF format over TCP, we saw ~18.2K logs/sec with a CPU usage of ~115% (1.15 out of 4 cores).

On a different environment running on a Smart-1 410 with CEF format over TLS we saw ~21.5K logs/sec with a CPU usage of ~115%

Another customer who compared the new solution to CPLogToSyslog stated that the new solution used fewer resources – but didn’t go into specifics.


Again – these are anecdotal examples and not official numbers.


Our focus in this release was to make sure the log exporter was not the bottleneck – to make sure it can outperform the indexer.

Re: Log Exporter guide


  • First off, you need to know that Check Point logs are arranged in a key:value format.
    For example, action:"drop" is [key]:[value] – the key is action and the value is drop.
    I’ll use this information in some of my answers.

My SIEM charges me based on storage/throughput – how can I reduce the number of logs I’m sending them?

We know that the access logs (VPN-1 & Firewall-1) comprise the bulk of the logs for most customers. And in many cases, those are not really the interesting logs.
We added the option to filter out those logs. In the targetconfiguration.xml file you can find the filter_out_by_connection parameter. If you set this to true, your access log will not be sent.
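For reference, this is a single boolean setting in targetConfiguration.xml. It presumably looks like this (the parameter name is taken from the text above; its exact placement in the file may differ):

```xml
<!-- targetConfiguration.xml: when true, connection (access) logs
     are not exported. Placement in the file may differ. -->
<filter_out_by_connection>true</filter_out_by_connection>
```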

That’s great! But I don’t really need the Application Control logs either. I only want the IPS logs to be sent. How can I do that?

Unfortunately, you can’t do that in this release.

But I can see the different blades which are filtered out! What happens if I just add the Application Control blade there? Won’t that work?

It probably will. But this isn’t recommended or supported. It wasn’t tested and could have unexpected results.
This is something we plan to address in future releases, but for now, it’s not supported.

The only thing that really interests me is why my users are blocked. Can I send out just the drop logs?

Unfortunately, in this release, we don’t support value-based filters. We can only filter based on the field (key).
Again, we plan to address value-based filters in a future release.

So what type of filters can you do?

I’m glad you asked! Let’s look at some examples.

Re: Log Exporter guide

Filters: Example 1

The customer wants to get Identity Awareness logs. He needs to keep records going back at least one year; however, he has a storage problem.

We start by looking at a sample IDA log.

Let’s analyze the raw data:


<134>1 2018-03-21 17:25:25 MDS-72 CheckPoint 13752 - [action:"Update"; flags:"150784"; ifdir:"inbound"; logid:"160571424"; loguid:"{0x5ab27965,0x0,0x5b20a8c0,0x7d5707b6}"; origin:""; originsicname:"CN=GW91,O=Domain2_Server..cuggd3"; sequencenum:"1"; time:"1521645925"; version:"5"; auth_method:"Machine Authentication (Active Directory)"; auth_status:"Successful Login"; authentication_trial:"this is a reauthentication for session 9a026bba"; client_name:"Active Directory Query"; client_version:"R80.10"; domain_name:"spec.mgmt"; endpoint_ip:""; identity_src:"AD Query"; identity_type:"machine"; product:"Identity Awareness"; snid:"9a026bba"; src:""; src_machine_group:"All Machines"; src_machine_name:"yonatanad";]

It contains a lot of relevant information, but some of those fields are probably not really relevant. Either because they contain information which is always static in my organization or information which is not IDA related.


If I wanted to boil it down into the relevant information I’d probably end up with something closer to this:


action:"Update"; origin:""; time:"1521645925"; auth_status:"Successful Login"; domain_name:"spec.mgmt"; identity_src:"AD Query"; identity_type:"machine"; snid:"9a026bba"; src:""; src_machine_group:"All Machines"; src_machine_name:"yonatanad";

Went down from 755 bytes to around 323 bytes, which is a reduction of ~60% in the log size.


So I map out the relevant fields:

action, origin, time, auth_status, domain_name, identity_src, identity_type, snid, src, src_machine_group, src_machine_name.

I then create a user-defined mapping file with those fields and set the exportAllFields parameter to false. Now only fields which appear in my mapping file will be sent (a sort of whitelist approach).


However this isn’t enough, as it will mean that I’ll get many logs from other blades which will be mostly empty - only containing those fields which exist in almost all logs such as action, origin, src, etc.


So if I want to make sure I only get Identity Awareness logs I’ll pick one (or more) of the fields which are unique to the Identity Awareness blade, such as the identity_src or identity_type fields and give those fields the ‘required’ attribute in the mapping file.

Now only logs which contain this field will be sent.
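Putting the whitelist together, the mapping file would contain one entry per kept field, with a unique IDA field marked as required. A sketch (only the 'required' attribute and the whitelist behavior are described above; the element and attribute names are assumptions, so check fieldsMapping.xml and sk122323 for the real syntax):

```xml
<!-- Sketch of a whitelist mapping: with exportAllFields=false in
     targetConfiguration.xml, only fields listed here are sent.
     Element/attribute names are assumptions; see fieldsMapping.xml. -->
<fields>
  <field required="true">
    <origName>identity_src</origName> <!-- unique to Identity Awareness: acts as the log filter -->
  </field>
  <field>
    <origName>action</origName>
  </field>
  <field>
    <origName>src_machine_name</origName>
  </field>
  <!-- ...and so on for the rest of the kept fields -->
</fields>
```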


The end result is that the customer has reduced his output to only IDA logs which only contain the bare minimum of what he actually needs.

Re: Log Exporter guide

Filters: Example 2

The customer has a very large deployment and has hundreds of GB of logs per day.

His vendor charges him per byte sent, and the customer is looking to reduce his footprint any way he can.


He needs all his logs, so filtering out logs is not an option. Instead, he’s looking to reduce the size of each log sent.


The whitelist approach won’t work this time as there are simply too many different fields. Instead, the customer is trying to identify fields which don’t contain relevant information and filter them out (a sort of blacklist approach).


Let’s start with a random sample log.


<134>1 2018-03-21 18:09:26 MDS-72 CheckPoint 13752 - [action:"Accept"; conn_direction:"Outgoing"; flags:"6307840"; ifdir:"inbound"; ifname:"eth1"; logid:"320"; loguid:"{0x5a9fba17,0x0,0x5b20a8c0,0x149c}"; origin:""; originsicname:"CN=GW91,O=Domain2_Server..cuggd3"; sequencenum:"1"; time:"1521648566"; version:"5"; __policy_id_tag:"product=VPN-1 & FireWall-1[db_tag={BACD59B6-0BBC-A544-A5F9-A136152F0B37};mgmt=Domain2_Server;date=1520501096;policy_name=Standard\]"; aggregated_log_count:"242539"; bytes:"17427"; client_inbound_bytes:"5647"; client_inbound_packets:"65"; client_outbound_bytes:"11780"; client_outbound_packets:"62"; connection_count:"121149"; creation_time:"1520417303"; dst:""; duration:"1231263"; hll_key:"6523019790322755370"; inzone:"Internal"; last_hit_time:"1521648533"; layer_name:"Network"; layer_name:"Application"; layer_uuid:"d2787740-4872-4342-a0c1-58470e2d9bef"; layer_uuid:"cdeb4bd1-f11f-4d36-a78f-03cfa317d06d"; match_id:"1"; match_id:"16777219"; parent_rule:"0"; parent_rule:"0"; rule_action:"Accept"; rule_action:"Accept"; rule_name:"Network_Rule_One"; rule_name:"Appi_Cleanup rule"; rule_uid:"51419c04-5fc4-4263-8cca-e5d14f2dcf56"; rule_uid:"5440fb90-dd92-4e6a-8191-8957c279f3a9"; outzone:"External"; packets:"127"; product:"VPN-1 & FireWall-1"; proto:"17"; protocol:"DNS-UDP"; server_inbound_bytes:"11780"; server_inbound_packets:"62"; server_outbound_bytes:"5647"; server_outbound_packets:"65"; service:"53"; service_id:"domain-udp"; sig_id:"12"; src:""; update_count:"2054"; ]

We are starting out at 1549 bytes.


Right off the bat, we need to decide if we need the header.

No? Let’s remove it and save 55 bytes (per log).


Next, I notice that the default field separator is semicolon+space. I can reduce this to just the semicolon for an extra 53 bytes.
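The separator lives in the format definition files (which, as noted later in this thread, set the delimiters and operators). As a hypothetical sketch (the element name here is invented for illustration; the real one is in the default definition files in the conf sub-directory):

```xml
<!-- Hypothetical sketch: shrink the field separator from "; " to ";".
     Element name is invented for illustration; see the definition
     files in the conf sub-directory for the actual syntax. -->
<fields_separator>;</fields_separator>
```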


Now we try to identify fields which don’t interest us. Obviously, this is customer specific, but I’d say that these fields can probably be safely removed in most cases:

flags, originsicname, sequencenum, version, __policy_id_tag, layer_uuid, server_inbound_bytes, server_inbound_packets, server_outbound_bytes, server_outbound_packets (those are duplicates of the client_inbound/client_outbound fields which already exist)

I could be much more aggressive with what I cut out but this is a good start.

Down to 982 bytes.


Now the next step is something that’s a bit more extreme but is from an actual use case where it was done.


There is no point in sending out client_inbound_packets:"65", which has a large key and a small value, when I can just as easily send out F11:"65".

I can create a mapping file on the receiving end (assuming the SIEM supports this) which knows to translate F11 back into the relevant key field.


So I create the relevant mapping file where fields which I want to cut get the ‘<exported>false</exported>’ property, and the rest of the fields will be mapped to relevant alpha-numeric codes.
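A sketch of the two kinds of entries involved (the <exported> element is quoted above; the rename element name is an assumption modeled on the bundled fieldsMapping.xml):

```xml
<!-- Drop a field entirely -->
<field>
  <origName>flags</origName>
  <exported>false</exported>
</field>
<!-- Rename a long key to a short code
     (the rename element name is an assumption) -->
<field>
  <origName>client_inbound_bytes</origName>
  <dstName>F11</dstName>
</field>
```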


We end up with:

F1:"Accept";F2:"Outgoing";F3:"inbound";F4:"eth1";F5:"320";F6:"{0x5a9fba17,0x0,0x5b20a8c0,0x149c}";F7:"";F8:"1521648566";F9:"242539";F10:"17427";F11:"5647";F12:"65";F13:"11780";F14:"62";F15:"121149";F16:"1520417303";F17:"";F18:"1231263";F19:"6523019790322755370";F20:"Internal";F21:"1521648533";F22:"Network";F22:"Application";F23:"1";F23:"16777219";F24:"0";F24:"0";F25:"Accept";F25:"Accept";F26:"Network_Rule_One";F26:"Appi_Cleanup rule";F27:"51419c04-5fc4-4263-8cca-e5d14f2dcf56";F27:"5440fb90-dd92-4e6a-8191-8957c279f3a9";F28:"External";F29:"127";F30:"VPN-1 & FireWall-1";F31:"17";F32:"DNS-UDP";F33:"53";F34:"domain-udp";F35:"12";F36:"";F37:"2054";

687 bytes. We started with 1549 bytes, so a reduction of ~55%.


And I can get better results if I’m willing to be fairly aggressive with my field filters.

Edit: As was correctly pointed out, the unit of measurement is not actually bits, but bytes (number of characters as given by a text editor word count). 

Re: Log Exporter guide

Maybe I missed it, but is there any place that shows a sample of what the mapping configuration file should look like? I don't see it detailed in this post or the SK article.

Re: Log Exporter guide

Hello Aaron,

There are three relevant files.  

The targetConfiguration.xml file contains the 'system configuration' such as port, protocol, etc., and can also be changed by using the command line flags.

This file also points to the definition and mapping files via the <mappingConfiguration> & <formatHeaderFile> parameters (in case you want to use custom settings).

The definition files (the default files are located in the conf sub-directory) set the header and log format - delimiters, operators, etc.

The mapping files describe the field mapping (renaming fields from X to Y) and the filtering options.

There are several mapping and definition files included by default (for syslog, CEF, etc.) and you can find them in the target folder. 

We also included a 'demo' mapping file called fieldsMapping.xml which has several examples of how to use some of the options.

The different options for each file are described in the SK.

Hope this helped clarify the issue.




Re: Log Exporter guide

How can I ensure that the "Audit logs" are also sent? Are they sent by default?

Audit logs as in creating a new rule, deleting a rule, creating objects, etc.

Re: Log Exporter guide

Hello Biju,

The audit logs are sent by default.

In the targetconfiguration.xml setting file, there is a parameter called log_types.

The line looks like this:


The default is for both logs and audit logs to be sent, but you can change this to only send one or the other.
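The log_types line presumably looks something like this (the parameter name is from the text above; the value names are assumptions, so check your targetConfiguration.xml and sk122323):

```xml
<!-- targetConfiguration.xml: which log types to export.
     Value names are assumptions; check the file / sk122323. -->
<log_types>all</log_types> <!-- e.g. all | log | audit -->
```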




Re: Log Exporter guide

Yes, I saw this and did not change any default settings. Even though the settings are at their defaults, in my scenario the audit logs are not reaching the syslog server, but the access control logs are appearing on the syslog server. I did a Wireshark capture on the syslog server, but didn’t find any audit logs.

Is there a way to find out if the audit logs are being sent out of the MDS?


Biju Nair



Re: Log Exporter guide

Hello Biju,

Edit - was originally part of another answer. I moved it here as it was more relevant to your question.

With regards to audit logs, there could be two 'gotchas' to look out for. If you're using a log server, some of your audit logs will probably be generated on your management server. You need to make sure that you have an exporter deployed on the server which holds the audit logs.

For an MDS server some of your audit logs will probably be generated on the MDS level - so you should make sure that you have an exporter deployed on your MDS level.



Re: Log Exporter guide

Some actual examples would go a long way, such as including the files/changes made for your filter examples.


Re: Log Exporter guide

I installed Log Exporter hotfix on a management server in my lab running R80.10 JHF 112.  I used CPUSE to download and install the Log Exporter hotfix.  Refer to sk122323 Installation section for more details.

Next you need to create your targets with the cp_log_export command.  Refer to sk122323 Basic deployment section for more details.

If you look in $EXPORTERDIR on the management server you will find a sub-directory for each target you created with cp_log_export.  Inside of the target sub-directory will be two files you will be using for filters.


Line 2 from the snippet below - <mappingConfiguration> specifies the file that contains all the field definitions for the filter/mapping. It can be any filename in that target sub-directory; I used fm-browsing.xml for this example.

Line 4 - <exportAllFields> starts as true, which sends every entry and ignores your <mappingConfiguration> file.  Use the information from those entries to determine which fields you care about, then define those in the filter/mapping file specified in <mappingConfiguration>.  Once you are ready to test your filter/mapping, toggle <exportAllFields> to false and restart log export with "cp_log_export restart name <name of target>"

      <mappingConfiguration>fm-browsing.xml</mappingConfiguration> <!--if empty the fields are sent as is without renaming-->

      <exportAllFields>true</exportAllFields> <!--in case exportAllFields=true - exported element in fieldsMapping.xml is ignored and fields not from fieldsMapping.xml are exported as notMappedField field-->



The format of this file is defined in the sk122323 Advanced configuration post-deployment section, in a table labeled "Field Mapping Configuration XML".

Example of the long XML format (more whitespace)

<?xml version="1.0" encoding="utf-8"?>

Example of a more condensed XML format (functions the exact same as above, just different visually and easier to cut and paste IMO)

<?xml version="1.0" encoding="utf-8"?>


Re: Log Exporter guide

Perfect, thanks Wes!

Re: Log Exporter guide


I have an issue with the "layer_uuid" field; I cannot seem to remove it from the logs. I have this in my mapping config file:
However, the field still shows up in the logs. I have removed other fields as well; those do not show up in the logs, so the configuration should be correct.


Re: Log Exporter guide

Hello Morten,

I think this might be related to the use of tables.

This is addressed in the SK, but can still often lead to misunderstandings.

From the SK:

<table> - Some fields will appear in tables depending on the log format. This information can be found in the elg log - one entry for every new field. A field can appear in multiple tables; each distinct instance is considered a new field.


In R80+ we introduced the concept of tables in the logs. When you wish to work with/on a field you must specify its exact location if it's in a table.

To find if a field is in a table you can search for that field in the elg file. For example:

[log_indexer 13874 4076104592]@MDS-72[12 Aug  9:48:22] Read Log Format field name:['layer_uuid']. Field from table:[match_table].

I can see that the 'layer_uuid' is in the 'match_table' table. So to manipulate that field (either change its name or whether or not to export it) I would use something like this:
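As an illustrative sketch (how the table is attached to the field entry is an assumption; the real syntax is demonstrated in fieldsMapping.xml):

```xml
<!-- Sketch: stop exporting layer_uuid, which lives in match_table.
     How the table is referenced is an assumption; see fieldsMapping.xml. -->
<field table="match_table">
  <origName>layer_uuid</origName>
  <exported>false</exported>
</field>
```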






You can also see an example of this in the example mapping file fieldsMapping.xml.



Re: Log Exporter guide

That worked.


Re: Log Exporter guide

Is it possible to add an origin-id like Cisco does?

logging origin-id { hostname | ip | ipv6 | string user-defined-id }

Re: Log Exporter guide


It's not possible in the current implementation, but it is a good enhancement idea.

I'll open an RFE for this - thanks!



Re: Log Exporter guide

Gosh darn it, I forgot something! (I'll edit and fill this in later)

Edit: I want to talk about callback functions. 

This feature was tested and is officially supported; however, we felt it was better to limit its exposure, which is why it's not present in the SK, and why I didn't mention it previously.

This feature was added reluctantly and only because the benefits it added in usability outstripped the potential detrimental impact.

It's a flexible and powerful tool which allowed us to add support for various use cases. 

Basically, we added some predefined functions for value manipulation within the mapping and definition files. This feature already exists in the mapping files, and anyone who digs around will see it and can figure it out on their own; we just didn't document it officially, to discourage users from making changes.

There are two main reasons to limit the exposure. First off, this is somewhat of a wildcard, and we are fairly certain that if you try hard you can probably find a way to break the product with the callback functions. Second, they have an impact on performance.

Each usage has a very small impact, but they can stack up rapidly with indiscriminate use, especially if you have multiple callbacks per log, with thousands of logs per second.

So while I'm going to give a short review of the callback functions, I wouldn't recommend using them indiscriminately.

The callback functions that we added are as follows:

  • replace_value: Swap values based on a key:value chart. We use this to map the Check Point severity (and other fields) to 3rd-party severities (CEF, LEEF, etc.)
              <arg key="default" value="Unknown"/>
              <arg key="0" value="Unknown"/>
              <arg key="1" value="Low"/>
              <arg key="2" value="Low"/>
              <arg key="3" value="Medium"/>
              <arg key="4" value="High"/>
              <arg key="5" value="Very-High"/>

  • append_string: Append a string to the end of a value. We used this to transform the Linux time from seconds (Check Point format) to milliseconds (CEF format)
              <arg key="append" value="000"/>

  • format_timestamp: Change the time format. We used this to transform the time from Linux epoch time (Check Point format) to human readable (syslog format).

            <arg key="format" value="MY CUSTOM FORMAT"/>

    The default format if none is used is: "%Y-%m-%dT%H:%M:%SZ"
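Tying it together, a field entry invoking one of these callbacks presumably looks something like this (the <arg> lines follow the examples above; the wrapping element names are assumptions, see fieldsMapping.xml for the real syntax):

```xml
<!-- Sketch: apply format_timestamp to the time field.
     Wrapping element names are assumptions; the <arg> line follows
     the examples above. See fieldsMapping.xml for the real syntax. -->
<field>
  <origName>time</origName>
  <callback name="format_timestamp">
    <arg key="format" value="%Y-%m-%dT%H:%M:%SZ"/>
  </callback>
</field>
```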

Hope this post is helpful, and please be mindful of your usage of callback functions.

Re: Log Exporter guide

Feature request

So, did we miss anything?

(of course we did! This was just the first release, we still have lots of features we want to add...)

Feel free to reply to this post with any suggestions and requests you have for future releases of this solution.

We take customer feedback very seriously, and it can often help us prioritize some features over others.

It will also help us if you can try to categorize your requests by importance - how urgent and/or important is each request?

Please remember that we can only do so many things at any given time, and so correctly prioritizing features is always a challenge.

I truly hope you found this (long) post helpful. 

If you have any other questions, leave a comment and I'll try to address them.




Re: Log Exporter guide

Yonatan, can you confirm if sending URL filtering logs is currently supported (each URL accessed by a user) or if this is a roadmap item?

Re: Log Exporter guide


URL Filtering logs (like all other blade logs) are supported in the current (and future) releases of the log exporter.

Just to give some context - the log exporter 'works' on the local *.log & *.adtlog files (such as fw.log & fw.adtlog) in the target folder (the default being $FWDIR/log). Note that in an MDS environment the $FWDIR variable changes based on your mdsenv value.
Any log that is in your log files - which include audit logs and the logs from all blades - will be exported.

Edit: moved the part about audit logs to its own answer.




Re: Log Exporter guide

Thanks Yonatan Philip that clarification is helpful.  I will spend a little time in the lab today to see if I can figure out a filter that will isolate just URL filtering log entries.  Thanks for confirming we were on the right track!

Re: Log Exporter guide


I'm using a custom fields mapping XML file and exporting to syslog. I'm wondering if there is a mechanism for printing every column in the mapping file even if the specific log record has no data - i.e. printing an empty column.

At present I get some IPS events (which still show as SmartDefense...) that don't have an action entry, so the data from the next available field (confidence_level in this case) is being put in the same column as other log entries' actions:

<134>1 2018-07-04T13:20:56Z fwmgr CheckPoint 24289 - [time:"1530710456" origin:"" originsicname:"CN=fw,O=fwmgr..eknpyv" product:"SmartDefense" confidence_level:"3" severity:"1" protection_type:"IPS"
<134>1 2018-07-04T20:13:23Z fwmgr CheckPoint 24289 - [time:"1530735203" origin:"" originsicname:"CN=fw,O=fwmgr..eknpyv" product:"Threat Emulation" action:"Accept" confidence_level:"0" severity:"0"

So for the first row above, there should be a blank column for the action, so that the confidence_level is aligned with the others.




Re: Log Exporter guide

Hi Paul,

Unfortunately, this option doesn't currently exist, but this does sound like a worthwhile enhancement.

I'll pass this along to the relevant party.

Although, if I'm thinking of regex, it might be better to always export the action field and just have the value be NULL or something along those lines. That will still keep it aligned but also enable the use of regex.

P.S. - IPS is called 'SmartDefense' because we export the raw logs. IPS is actually the display name, while SmartDefense is the actual field name (a legacy name...).

All fields have an actual name and a display name that you see in the GUI. In most cases they are very similar, but in some cases, such as where the blade name changed over time, the changes are more drastic.



Re: Log Exporter guide

Thank you for this discussion - very helpful!


Re: Log Exporter guide

Hi Yonatan, 

Is this good looking tool going to replace the LEA connection in the near future?

Regards, Maarten


Re: Log Exporter guide

Hello Maarten,

I'm not entirely sure what you mean by that, but I'll say that while this tool does open up some new possibilities, and we might take advantage of them in the future, I'm not aware of anything along those lines that is planned for the near future.

But keep in mind that I already mentioned I'm not the one who makes those decisions, so it's possible that something is on the roadmap and I'm just not aware of it.

We do have several improvements and features that we wish to add to this tool and they will probably be done first (we've already started working on some things, but nothing I can share).



0 Kudos