Yonatan_Philip
Employee Alumnus

Log Exporter guide

Hello All,

We have recently released the Log Exporter solution.
A few posts have already gone up and the full documentation can be found at sk122323.

However, I've received a few questions, both online and offline, and decided to create a sort of Log Exporter guide.

But before I begin I'd like to point out that I'm not a Check Point spokesperson, nor is this an official Check Point thread.

I was part of the Log Exporter team and am creating this post as a public service.

I'll try to focus only on the current release, and please remember that anything I say regarding future releases is not binding or guaranteed.
Partly because I'm not the one who makes those decisions, and partly because priorities shift based on customer feedback, resource limitations and a dozen other factors. The current plans and roadmap are likely to change drastically over time.

And just for the fun of it, I'll mostly use a question-and-answer format in this post (simply because I like it and it's convenient).

 

Log Exporter – what is it?

Performance

Filters

Filters: Example 1

Filters: Example 2

Gosh darn it, I forgot something! (I'll edit and fill this in later)

Feature request

146 Replies
Leon_Franken
Explorer

Is it possible to add an origin-id like Cisco does?

logging origin-id { hostname | ip | ipv6 | string user-defined-id }

Yonatan_Philip
Employee Alumnus

Hi,

It's not possible in the current implementation but is a good enhancement idea. 

I'll open an RFE for this - thanks!

Yonatan

hutinop
Participant

Very useful information on how to filter the data.

We are in the deployment phase and we need to assess the size of the server that will receive the logs.

@Yonatan_Philip 

Do you know how we can estimate the log flow (log volume per day, or number of logs)? And which metrics play a role in the number of logs produced (such as traffic volume through the Check Point equipment, number of users, etc.)?

Many thanks in advance for your reply!

 

Yonatan_Philip
Employee Alumnus

Gosh darn it, I forgot something! (I'll edit and fill this in later)

Edit: I want to talk about callback functions. 

This feature was tested and is officially supported; however, we felt it was better to limit its exposure, which is why it's not present in the sk, and why I didn't mention it previously.

This feature was added reluctantly, and only because its usability benefits outweighed the potential detrimental impact.

It's a flexible and powerful tool which allowed us to add support for various use cases. 

Basically, we added some predefined functions for value manipulation within the mapping and definition files. This feature already exists in the mapping files, and anyone who digs around will see it and can figure it out on their own; we just didn't document it officially, to discourage users from making changes.

There are two main reasons to limit the exposure: first, this is somewhat of a wildcard, and we are fairly certain that if you try hard enough you can find a way to break the product with the callback functions; second, they have an impact on performance.

Each usage has a very small impact, but they can stack up rapidly with indiscriminate use, especially if you have multiple callbacks per log, with thousands of logs per second.

So while I'm going to give a short review of the callback functions, I wouldn't recommend using them indiscriminately.

The callback functions that we added are as follows:

  • replace_value: Swap values based on a key:value map. We use this to map the Check Point severity (and other fields) to third-party severity (CEF, LEEF, etc.)
    Example:
    <field><origName>app_risk</origName><dstName>cp_app_risk</dstName>
      <callback>
          <name>replace_value</name>
          <args>
              <arg key="default" value="Unknown"/>
              <arg key="0" value="Unknown"/>
              <arg key="1" value="Low"/>
              <arg key="2" value="Low"/>
              <arg key="3" value="Medium"/>
              <arg key="4" value="High"/>
              <arg key="5" value="Very-High"/>
          </args>
      </callback>
    </field>

  • append_string: Append a string to the end of a value. We used this to transform the Unix time from seconds (Check Point format) to milliseconds (CEF format) by appending "000".
    Example:
    <field><origName>time</origName><dstName>rt</dstName>
      <callback>
          <name>append_string</name>
          <args>
              <arg key="append" value="000"/>
          </args>
      </callback>
    </field>

  • format_timestamp: Change the time format. We used this to transform the time from Unix time (Check Point format) to human-readable (syslog format).
    Example:

    <callback>
    <name>format_timestamp</name>
        <args>
            <arg key="format" value="MY CUSTOM FORMAT"/>
        </args>
    </callback>

    The default format if none is used is: "%Y-%m-%dT%H:%M:%SZ"
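
    For instance, assuming the format string takes the same strftime-style tokens as the default above, a classic syslog-style timestamp could be produced like this (the exact value is just an illustration, not an official recommendation):

    <callback>
    <name>format_timestamp</name>
        <args>
            <arg key="format" value="%b %d %H:%M:%S"/>
        </args>
    </callback>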

Hope this post is helpful, and please be mindful of your usage of callback functions.

Yonatan_Philip
Employee Alumnus

Feature request

So, did we miss anything?

(Of course we did! This was just the first release; we still have lots of features we want to add...)

Feel free to reply to this post with any suggestions and requests you have for future releases of this solution.

We take customer feedback very seriously, and it can often help us prioritize some features over others.

It will also help us if you categorize your requests by importance: how urgent and/or important is each request?

Please remember that we can only do so many things at any given time, and so correctly prioritizing features is always a challenge.

I truly hope you found this (long) post helpful. 

If you have any other questions, leave a comment and I'll try to address them.

Regards,

 Yonatan 

Wes_Belt
Employee Alumnus

Yonatan, can you confirm whether sending URL Filtering logs (each URL accessed by a user) is currently supported, or whether this is a roadmap item?

Yonatan_Philip
Employee Alumnus

Hi,

URL Filtering logs (like all other blade logs) are supported in the current (and future) releases of the log exporter.

Just to give some context - the log exporter 'works' on the local *.log & *.adtlog files (such as fw.log & fw.adtlog) in the target folder (the default being $FWDIR/log). Note that in an MDS environment the $FWDIR variable changes based on your mdsenv value.
Any log that is in your log files - which includes audit logs and the logs from all blades - will be exported.
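
For example, on an MDS you can check which files a given domain's exporter would read. A minimal sketch (the domain name below is illustrative):

mdsenv my_domain        # points $FWDIR at that domain's environment
ls $FWDIR/log/*.log $FWDIR/log/*.adtlog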

Edit: moved the part about audit logs to its own answer.

HTH

Yonatan  

Wes_Belt
Employee Alumnus

Thanks Yonatan Philip, that clarification is helpful. I will spend a little time in the lab today to see if I can figure out a filter that will isolate just the URL Filtering log entries. Thanks for confirming we were on the right track!

MiteshAgrawal15
Participant

Hi @Yonatan_Philip ,

Thanks for sharing this information.

We are facing issues in getting audit logs from Check Point R80.10. We have Micro Focus ArcSight in our environment.

We are receiving the traffic logs without any issues, and they are parsed properly. But the audit logs we see in SmartConsole are parsed as "Log" and not much is captured in the raw event. At the same time, from other firewalls on R80.20 we are receiving "Log In", "Log Out", "Modify Rule" etc. events, with the username and other details captured.

 

Please help in rectifying the issue. What configuration do we need to use in order to receive audit logs from R80.10 using the Log Exporter solution?

TIA

Regards,

Mitesh Agrawal

Paul_Hagyard
Advisor

Hi,

I'm using a custom fields mapping XML file and exporting to syslog. I'm wondering if there is a mechanism for printing every column in the mapping file even if the specific log record has no data - i.e. printing an empty column.

At present I get some IPS events (which still show as SmartDefense...) that don't have an action entry, so the data from the next available field (confidence_level in this case) is being put in the same column as other log entries' actions:

<134>1 2018-07-04T13:20:56Z fwmgr CheckPoint 24289 - [time:"1530710456" origin:"0.0.0.0" originsicname:"CN=fw,O=fwmgr..eknpyv" product:"SmartDefense" confidence_level:"3" severity:"1" protection_type:"IPS"
<134>1 2018-07-04T20:13:23Z fwmgr CheckPoint 24289 - [time:"1530735203" origin:"192.168.2.1" originsicname:"CN=fw,O=fwmgr..eknpyv" product:"Threat Emulation" action:"Accept" confidence_level:"0" severity:"0"

So for the first row above, there should be a blank column for the action, so that the confidence_level is aligned with the others.

Cheers,

Paul

Yonatan_Philip
Employee Alumnus

Hi Paul,

Unfortunately, this option doesn't currently exist, but this does sound like a worthwhile enhancement.

I'll pass this along to the relevant party.

Although, thinking of regex, it might be better to always export the action field and just have the value be NULL or something along those lines. That would still keep everything aligned but also enable the use of regex.

P.S. - IPS is called 'SmartDefense' because we export the raw logs. IPS is actually the display name, while SmartDefense is the actual field name (a legacy name...).

All fields have an actual name and a display name that you see in the GUI. In most cases they are very similar, but in some cases, such as where the blade name changed over time, the difference is more drastic.

 

G_W_Albrecht
Legend

Thank you for this discussion - very helpful!

CCSP - CCSE / CCTE / CTPS / CCME / CCSM Elite / SMB Specialist
Maarten_Sjouw
Champion

Hi Yonatan, 

Is this good-looking tool going to replace the LEA connection in the near future?

Regards, Maarten

Yonatan_Philip
Employee Alumnus

Hello Maarten,

I'm not entirely sure what you mean by that, but I'll say this: while this tool does open up some new possibilities, and we might take advantage of them in the future, I'm not aware of anything along those lines planned for the near future.

But keep in mind that, as I already mentioned, I'm not the one who makes those decisions, so it's possible that something is on the roadmap and I'm just not aware of it.

We do have several improvements and features that we wish to add to this tool, and they will probably be done first (we've already started working on some things, but nothing I can share).

Regards,

 Yonatan 

Maarten_Sjouw
Champion

What I mean by that is that the LEA (OPSEC) connection is currently used to send instant log information to other systems, mainly SIEM solutions. As I have seen some entries on Log Exporter mentioning similar functionality, but with additional filtering on top, it looks to me like all the info getting to the log can be exported as well.

Currently the LEA connection misses some of the information (some log-in records, for example), and not all fields seem to reach the external system.

So that's what I mean by it.

Regards, Maarten.

PhoneBoy
Admin

I believe the plan is to build new functionality/enhancements into the Log Exporter tool rather than extend LEA.

That doesn't mean LEA goes away, but different and more integrations will be possible with Log Exporter over time.

Maarten_Sjouw
Champion

One of the current problems with LEA is that a newly set-up connection will use a SHA-256 certificate, and most SIEM vendors have not updated their LEA clients to support SHA-256 certificates.

This is why very few new LEA connections are being set up these days.

Regards, Maarten
Yonatan_Philip
Employee Alumnus

I was recently asked about JSON support for the log exporter, and my initial reaction was that it isn't supported in the current release. 

However, after looking into this a bit more, I found that with some basic manipulation of the configuration files I was able to reformat the logs into a format that passes JSON validators.

However, just because the output passes validators doesn't actually make it useful.

It's formatted as JSON, but in a very basic and simplified form.
An example log would look like this:

{"action":"Drop", "ifdir":"inbound", "ifname":"eth1", "loguid":"{0x0,0x0,0x0,0x0}", "origin":"192.168.32.91",  "time":"1522592600", "version":"5", "dst":"10.32.30.255", "message_info":"Address spoofing", "product":"VPN-1 & FireWall-1", "proto":"17", "s_port":"137", "service":"137", "src":"10.32.30.20", "":""}

One thing that should be noted is that we do have duplicate keys; while this is compliant with the RFC, it is not recommended (but it can't be helped in the current release).

I was wondering if anyone has a use case where such JSON output would be useful, and if so, what specifically are the requirements? Does it just have to be JSON-compliant, or is there anything specific you'd be looking for?

For those who are interested, here is the configuration I used to transform the log into a JSON format:

<start_message_body>{"</start_message_body>
<end_message_body>":""}</end_message_body>
<message_separator>&#10;</message_separator><!-- &#10;=='\n' -->
<fields_separatator>, "</fields_separatator>
<field_value_separatator>":</field_value_separatator>
<value_encapsulation_start>&quot;</value_encapsulation_start>
<value_encapsulation_end>&quot;</value_encapsulation_end>

Note: for convenience, this adds an empty "":"" pair at the end of each log.

I'd be interested to hear feedback about JSON-related use cases.

Regards,

 Yonatan  

Vladimir
Champion

Without thinking on this subject at length, one possible use case would be integration with cloud services' native logging and enforcement systems. We could, conceivably, output the JSON-formatted events to AWS CloudWatch to get natively integrated metrics, alarms, etc.

Yonatan_Philip
Employee Alumnus

But would any of those applications actually be able to make sense of the data?

Just because it can officially be called 'JSON' doesn't mean it can actually be parsed into useful data. That's what I meant when I asked whether there are any requirements other than just being 'JSON'.

Vladimir
Champion

It looks like in AWS' case, they do not much care what data you are logging to them, so long as it is in JSON:

Amazon CloudWatch Logs JSON Log Format Support 

You pretty much define filters in CloudWatch yourself to look for patterns in your logs, and define metrics, alerts and events based on those:

Filter and Pattern Syntax - Amazon CloudWatch Logs 
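
To make that concrete: a metric filter that matches dropped connections in JSON logs like the sample earlier in this thread would look something like this (a minimal sketch following the pattern syntax documented above; the field name is taken from that sample log):

{ $.action = "Drop" }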

Simon_Taylor
Contributor

Hi Yonatan Philip,

I have a JSON requirement. I'd like to output the messages in cee-enhanced format in order to parse them using rsyslog's mmjsonparse. On my rsyslog server I reformat the message into LEEF, CEF or JSON depending on the tool I'm forwarding to (in most cases all three) and ship it on.

Using your example above I inserted @cee: and some of the messages are successfully parsed:

@cee: {"action":"Reject", "flags":"2304", "ifdir":"inbound", "ifname":"daemon", "loguid":"{0x0,0x0,0x0,0x0}", "origin":"10.6.41.170", "time":"1530153441", "version":"1", "dst":"172.30.235.66", "encryption_failure:":"no response from peer.", "fw_subproduct":"VPN-1", "peer_gateway":"10.12.130.249", "proto":"6", "reject_category":"IKE failure", "rule":"0", "s_port":"46792", "scheme:":"IKE", "service":"18192", "src":"10.6.41.196", "vpn_feature_name":"IKE", "":""}

but others are not, probably due to the bracketed __policy_id_tag value in this one:

@cee: {"action":"Accept", "flags":"51460", "ifdir":"outbound", "ifname":"eth0", "loguid":"{0x5b3466e3,0x5f4b0001,0x612906c4,0x7b6}", "origin":"10.6.41.97", "time":"1530160867", "version":"1", "__policy_id_tag":"product=VPN-1 & FireWall-1[db_tag={1AF555CB-2329-464E-986C-7D2991E1C63A};mgmt=sma-nscma001;date=1481830965;policy_name=Development_ISG_current\]", "dst":"10.6.41.171", "message_info":"Implied rule", "nat_addtnl_rulenum":"0", "nat_rulenum":"0", "product":"VPN-1 & FireWall-1", "proto":"17", "rule":"0", "s_port":"123", "service":"123", "service_id":"ntp-udp", "smartdefense_profile":"No Protection", "src":"10.6.41.97", "xlatedport":"0", "xlatedst":"0.0.0.0", "xlatesport":"10135", "xlatesrc":"10.6.41.99", "":""}

If you can see a better way to achieve this then I'd be interested to learn it.

Additionally, can you elaborate on which keys are duplicated?

Cheers,

Simon

Yonatan_Philip
Employee Alumnus

Hello Simon,

This is just a guess, but it looks to me as if the issue is probably the square brackets inside the log - that is, inside the value fields (as part of the log data).

The way to resolve it is by escaping. Inside the format definition file you have a section for escaping, which looks like this:

<escape_chars>
<char><orig>\</orig> <!--MUST be first to prevent double escaping --> <escaped>\\</escaped></char>
<char><orig>"</orig><escaped>\"</escaped></char>
<char><orig>]</orig><escaped>\]</escaped></char>
<char><orig>&#10;</orig><escaped> </escaped></char>
</escape_chars>

(might have different values depending on the format you're using).

If you notice, in the example above we escape the closing square bracket (]) with a backslash (\]), but not the opening square bracket. I can see the same thing in your log - which effectively means you opened a bracket and never closed it.

I'm sitting here scratching my head trying to remember if we did this on purpose or if this is an oversight/bug.

If you could test this and let me know if it helped, I'd appreciate it; in the meantime, I'll ask someone to double-check whether this is a bug.

Looking at the cee-enhanced documentation, you might have to escape the curly brackets as well: '{' '}'.

Note that some applications ignore backslash escaping - in such cases, instead of escaping with a backslash, simply replace the bracket with another character of your choosing. Bear in mind that this changes the actual values, so if you pipe and/or parse them, this needs to be taken into account.

Regarding duplicate values - there are several scenarios where this can happen; the easiest way to see an example is to create another layer (on an R80.10 GW).

The log will hold duplicate keys for many values such as rule name, layer name etc.

Edit: I checked, and escaping only the closing bracket was on purpose. We wished to keep changes to the payload to the absolute minimum. We had to escape the closing bracket to avoid log corruption, but the opening bracket didn't have the same effect.

However, since it seems that this might impact an external (third-party) log parser, I'll open an RFE to update the default settings to escape both the opening and closing brackets.

HTH 

 Yonatan 

Simon_Taylor
Contributor

Hi Yonatan Philip,

After much head scratching today I finally got this to parse via mmjsonparse :)

As you suggested I escaped/replaced the square brackets. My syslog.xml contains:

<start_message_body>@cee: {"</start_message_body>
<end_message_body>localinfo":"myceelog"}</end_message_body>
<message_separator>&#10;</message_separator><!-- &#10;=='\n' -->
<fields_separatator>,"</fields_separatator>
<field_value_separatator>":</field_value_separatator>
<value_encapsulation_start>&quot;</value_encapsulation_start>
<value_encapsulation_end>&quot;</value_encapsulation_end>
<escape_chars>
   <!--MUST be first to prevent double escaping -->
   <char><orig>\</orig><escaped>\\</escaped></char>
   <char><orig>"</orig><escaped>_</escaped></char>
   <char><orig>[</orig><escaped>_</escaped></char>
   <char><orig>]</orig><escaped>_</escaped></char>
   <char><orig>&#10;</orig><escaped></escaped></char>
</escape_chars>

The main obstacle was the message format: I had to force the timestamp to comply with the RFC 5424 (The Syslog Protocol) timestamp format (I'm on R77.30 Mgmt, perhaps that's why?):

<name>format_timestamp</name>
   <args>
      <arg key="format" value="%Y-%m-%dT%H:%M:%SZ"/>
   </args>

Interesting bits of information are:

  • some keys contain ":" but this didn't affect the JSON parser as I thought it might.
  • "{" "}" "=" ":" ";" - all of which appear in the value strings - did not require escaping.

It would still be very useful if CEE were a supported output format in future versions. For example, the key "__policy_id_tag" contains a list as its value:

"__policy_id_tag": "product=VPN-1 & FireWall-1_db_tag={1E295337-3B29-FD42-8155-FC9FC0CE93D6};mgmt=my_cma_name;date=1530151258;policy_name=Standard_",

which would be better displayed as:

"__policy_id_tag": { "product": "VPN-1 & FireWall-1", "db_tag": "{1E295337-3B29-FD42-8155-FC9FC0CE93D6}", "mgmt": "my_cma_name", "date":"1530151258", "policy_name":"Standard" },

Thanks again for the quick support. Looking forward to working with this product much more in the coming weeks!

Cheers,

Simon

Yonatan_Philip
Employee Alumnus

Hi Simon,

I'm glad to hear that you got it working!

Regarding the timestamp - we did indeed have a bug with the timestamp not being RFC compliant.

We actually released a minor update a few days ago with a few critical bug fixes - the two main bugs fixed were the timestamp issue and the default installation folder on R77.30 MDS servers.

Could you please verify if you're running the newer take of the log exporter?

If you're running the new take and still seeing an issue with the timestamp format, please let me know.
When you say that we should support CEE, I'm assuming you mean add JSON format support?

We've gotten this request a few times and have it on the roadmap, but as far as I know it's somewhat down the list.

Although the number of such requests is growing, so this might change.

HTH 

 Yonatan

Simon_Taylor
Contributor

Hi Yonatan Philip,  

I’m using the T25 release.

And yes support for JSON would be the best option for me.

 

As I use an rsyslog server to aggregate/normalise our logs, the way I see it, once you have JSON you can craft any other format you like.

We run more than one brand of SIEM so by using JSON I can provide logs to each vendor. 
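
To illustrate, this is roughly what that normalisation step looks like on the rsyslog side (a minimal sketch, assuming rsyslog 8.x with the mmjsonparse module available; the template, field selection and output file here are illustrative, not my production config):

module(load="mmjsonparse")
action(type="mmjsonparse")    # parse the @cee: JSON payload into $! properties

# re-emit a few parsed fields in a custom layout (a LEEF or CEF template works the same way)
template(name="norm" type="string"
         string="%$!action%|%$!src%|%$!dst%|%$!product%\n")

if $parsesuccess == "OK" then {
    action(type="omfile" file="/var/log/cp_normalised.log" template="norm")
}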

 

Cheers, Simon

Yonatan_Philip
Employee Alumnus

Regarding JSON - it's becoming apparent that JSON support, along with HTTP POST, might be something we will have to look into.

I've passed this feedback along to the relevant parties.

Regarding the timestamp issue - I passed it to R&D to investigate. It should have been fixed in take 25; we'll look into this.

RPdeBeer
Participant

Hi,

Is it possible to have the CMA's IP address as the origin IP instead of the MDS IP?

Greetings,

R.P. de Beer

Yonatan_Philip
Employee Alumnus

Hello Rutger-Paul,

We are aware of this issue. It was a limitation in the first release.

Because domains are actually aliases (at the network level), the exporter sends out the logs with the server's IP, which is the MDS IP.

This is addressed in R80.20, and we will possibly address it in the next release for R80.10 as well.

Regards,

 Yonatan 

Biju_Nair
Contributor

Yes, I am also facing the same issue: the IP address shown in the ArcSight server (syslog) is the MDS IP and not the CMA IP.

I was under the impression that I had done something wrong while configuring it, because mdsenv does not show that you entered the CMA. However, I was able to find the config.xml file in the "target" directory, showing that the new target was created in the correct location.

Please let us know in which take of R80.10 this is resolved.

(I am currently running T103 of R80.10.)
