Via Check Point Support you can get a syslog exporter for SIEM applications for R80.10 Management, which allows an easy and secure method for exporting Check Point logs over syslog. Exporting can be done in several standard protocols and formats.

Log Exporter supports: Splunk, ArcSight, RSA, LogRhythm, QRadar, McAfee.

Log Exporter is a multi-threaded daemon service running on a log server. Each log that is written on the log server is read by the Log Exporter daemon, transformed into the desired format and mapping, and then sent to the end target.

Installation requires R80.10 Jumbo Hotfix Take 56 or higher.

Syntax:
# cp_log_export add name <name> [domain-server <domain-server>] target-server <target-server> target-port <target-port> protocol <(udp|tcp)> [optional arguments]

Commands:
add - Deploys a new Check Point log exporter.
set - Updates an exporter's configuration.
delete - Removes an exporter.
show - Prints an exporter's current configuration.
status - Shows an exporter's overview status.
start - Starts an exporter process.
stop - Stops an exporter process.
restart - Restarts an exporter process.
reexport - Resets the current position and re-exports all logs per the configuration.

Regards, Heiko
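As a concrete illustration of the syntax above (a sketch only: the exporter name, target address and port below are placeholder values, and the full list of optional arguments is in sk122323):

```shell
# Placeholder values: "siem_exporter", 192.0.2.10 and 514 are examples only.
# Create a new exporter that sends logs to a SIEM over TCP syslog:
cp_log_export add name siem_exporter target-server 192.0.2.10 target-port 514 protocol tcp

# Inspect and control the exporter afterwards:
cp_log_export show name siem_exporter
cp_log_export status name siem_exporter
cp_log_export restart name siem_exporter
```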
Hello All,

We have recently released the Log Exporter solution. A few posts have already gone up, and the full documentation can be found at sk122323. However, I've received a few questions both on and offline and decided to create a sort of Log Exporter guide.

Before I begin, I'd like to point out that I'm not a Check Point spokesperson, nor is this an official Check Point thread. I was part of the Log Exporter team and am creating this post as a public service. I'll try to focus only on the current release, and please remember that anything I might say regarding future releases is not binding or guaranteed. Partly because I'm not the one who makes those decisions, and partly because priorities will shift based on customer feedback, resource limitations and a dozen other factors. The current plans and roadmap are likely to change drastically over time.

And just for the fun of it, I'll mostly use the question-answer format in this post (simply because I like it and it's convenient).

Contents:
Log Exporter – what is it?
Performance
Filters
Filters: Example 1
Filters: Example 2
Gosh darn it, I forgot something! (I'll edit and fill this in later)
Feature request
So we have access to a Smart-1 log server with R80.10, and it is configured only as a logging server, with no management server or other blades. It receives logs from several Check Point firewalls via a management server (which we don't have access to), and these logs then get forwarded to the above Smart-1 logging server, which we do have access to.

We are trying to set up an OPSEC/LEA connection so our SIEM can pull logs from the logging server. We can create the connection, and SIC is generated and activated. The trouble is that the SIEM complains it can't connect on port 18120 to get the certificate. We can access port 18184 fine from the SIEM and via telnet, but we get no response on port 18120. Our Check Point support engineer told us that because it is configured only as a logging server with no management blade, we won't be able to use OPSEC/LEA to pull logs from it, and that syslog is the only option. Syslog doesn't work especially well with our SIEM, as it needs some major parsing to account for the originating source devices being different from the server our SIEM receives syslogs from (i.e. the logging server).

Does anyone know if OPSEC/LEA is possible in this setup? Our SIEM providers say this is the standard way most of their other clients retrieve logs from Check Point products. Just wondering if there is a way to use OPSEC/LEA at all in this scenario, or whether we have to live with the PITA syslog option that's not ideal for us?

Ta
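As a basic connectivity check from the SIEM side, a sketch like this (the host name is a placeholder for the log server's address) shows which of the two OPSEC ports actually accepts connections:

```shell
# check_port: succeed (exit 0) if a TCP connection to host:port opens within 5s.
check_port() {
    timeout 5 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null
}

# Placeholder host: replace log-server.example.com with the Smart-1 log server.
# 18184 is the LEA service itself; 18120 serves the OPSEC certificate pull,
# which on a log-only server is typically not listening, matching the symptom above.
for port in 18184 18120; do
    if check_port log-server.example.com "$port"; then
        echo "port $port: open"
    else
        echo "port $port: closed or filtered"
    fi
done
```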
In R80, where did the SAM rules move to? The R80 guide states: "In the SmartView Monitor toolbar, click the Suspicious Activity Rules button." However, I do not see this option. Did I miss something?
Hi,

Is this available in R80, i.e. sending Check Point security logs to 3rd-party devices via syslog from the management server? This utility was previously available as a hotfix called CPLogtoSyslog.

Thx, Bob
In this TechTalk, Kfir Dadosh and Oren Koren will demonstrate how to leverage SmartEvent to improve visibility of security events occurring in your Check Point environment!

Topics include:
Architecture overview
How to build custom SmartEvent reports
Upcoming SmartEvent features

Slides: https://community.checkpoint.com/docs/DOC-2795

Q&A that we did not get to live will be answered as comments below. The video of the session below will be visible to CheckMates members who are logged in. Video Link: 6357
Hi everyone,

R80.10 SmartEvent has a very capable engine for customized views and reports based on logs and audit logs. The front-end is called SmartView. We want to use this community to share our customized dashboards and reports created with SmartView.

Let's have this thread as the main discussion of all custom reports, so that newcomers to SmartEvent will have one place with a repository of custom reports to choose from. I'm thinking of this thread as the UI equivalent of the highly popular "My Top 3 Check Point CLI commands".
Is there a view where I can monitor the throughput traffic/bandwidth of an interface in real time, as well as over a defined period? I'm coming from the SonicWALL world and was looking to see if there is similar functionality. Also, is there a way to pull a report that has the bandwidth usage for a specific interface for a specific time period? Thanks for looking.
Hello,

SmartConsole R80.10 here. Here is what I'd like to do:
1. Go to Logs & Monitor
2. Use Open Log View to create a new Logs tab
3. Enter search criteria, for example: blade:ips
4. Change the tab name from Logs to IPS

How do I achieve step 4? Also, how do I fix the search scope to the last 24 hours? I set it, but the next time I start SmartConsole it resets back to 7 days. Very inconvenient.

Thanks for your comments.
Hello Guys,

Does someone know for sure if we can still use OPSEC with SmartCenter in R80.10? We are going to migrate to R80.10, and we are using Splunk to collect Check Point logs. I can't find anything written down describing how to configure the interaction between R80.10 and Splunk. Do we have to use syslog? If yes, what is the recommended configuration?

Thanks!
Here is a small guide on how to add a new disk >2 TB to your firewall and expand the size of /var/log.

Check whether we are running a 64-bit kernel (it is needed for handling >2 TB disk sizes):

[Expert@firewall:0]# uname -a
Linux firewall 2.6.18-92cpx86_64 #1 SMP Sun Jan 21 10:26:26 IST 2018 x86_64 x86_64 x86_64 GNU/Linux

List the disks with fdisk -l or parted -l:

[Expert@firewall:0]# parted -l
Model: Msft Virtual Disk (scsi)
Disk /dev/sda: 100GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number  Start   End     Size    Type     File system  Flags
 1      32.3kB  313MB   313MB   primary  ext3         boot
 2      313MB   8900MB  8587MB  primary  linux-swap
 3      8900MB  107GB   98.5GB  primary               lvm

Model: Msft Virtual Disk (scsi)
Disk /dev/sdb: 34.4GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number  Start   End     Size    Type     File system  Flags
 1      65.5kB  34.4GB  34.4GB  primary  ntfs

Model: Msft Virtual Disk (scsi)
Disk /dev/sdc: 4295GB   <-- THIS IS THE NEW DISK
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number  Start   End     Size    File system  Name     Flags
 1      1049kB  4295GB  4295GB               primary  lvm

Now to the LVM part. Prepare the new disk to be used in LVM using the parted utility:

[Expert@firewall:0]# parted -s /dev/sdc mklabel gpt
[Expert@firewall:0]# parted -s /dev/sdc unit mib mkpart primary 1 100%
[Expert@firewall:0]# parted -s /dev/sdc set 1 lvm on
[Expert@firewall:0]# # Ask the kernel to re-read the partition table
[Expert@firewall:0]# partprobe

One could skip this step of creating a partition and just add the whole disk as an LVM physical volume in the next step, but I do it this way to ensure there is information on the disk (showing that it is in use), so when other sysadmins or tools list the disk they will see the partition instead of a disk that appears empty. This might stop them from assuming it is free to use.
Create the LVM physical volume and add it to the existing volume group:

[Expert@firewall:0]# # Tag/prepare/reserve the disk so it can be used in the LVM/VG
[Expert@firewall:0]# pvcreate /dev/sdc1
[Expert@firewall:0]# # Then add the new LVM disk to the volume group
[Expert@firewall:0]# vgextend vg_splat /dev/sdc1

Now I will list the current location of /dev/vg_splat/lv_log (that is where the /var/log file system resides) and see where the data is placed on the two disks I now have in the volume group vg_splat. My goal is to have the log file system reside on the new disk only, and not on the OS disk.

List the current location of the /var/log file system (the lv_log logical volume):

[Expert@firewall:0]# lvs -o +devices   # use "pvdisplay -m" for a more detailed view
  LV         VG       Attr   LSize  ... Devices
  lv_current vg_splat -wi-ao 20.00G     /dev/sda3(0)
  lv_log     vg_splat -wi-ao 63.00G     /dev/sda3(640)   <-- we want to move this data to sdc1

(The above command shows that lv_log resides on disk sda, partition 3 (/dev/sda3), and we want to move it to the new disk called sdc.)

Now let's move the existing /var/log data residing on the same disk as the operating system, to speed up the I/O and to ensure we only allocate data for log files on the new disk. This can be done without blocking existing I/O during the move. I would recommend running it in the background by adding the extra option "--background". That way you can also just disconnect the secure shell session and not wait for the command to finish (it could take hours).

Move the existing log file system from the system disk to the dedicated log disk (shown here as a foreground process):

[Expert@firewall:0]# # NB: I recommend adding the extra option --background to the below command
[Expert@firewall:0]# # Move [volume] [FROM disk] [TO disk]
[Expert@firewall:0]# pvmove -n /dev/vg_splat/lv_log /dev/sda3 /dev/sdc1
  /dev/sda3: Moved: 0.6%
  /dev/sda3: Moved: 1.4%
  ...
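If you do run the move with --background, a sketch like the following lets you check on its progress (using standard LVM reporting fields; vg_splat is the volume group from this guide):

```shell
# While a pvmove is in flight, LVM creates a temporary [pvmove0] volume;
# the copy_percent field reports how much data has been moved so far.
# Repeat (or wrap in watch) until the pvmove volume disappears.
lvs -a -o lv_name,copy_percent,devices vg_splat
```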
  /dev/sda3: Moved: 100%

Then verify that the data has been moved correctly. List the location of the logical volumes on the PV disks again:

[Expert@firewall:0]# lvs -o +devices
  LV         VG       Attr   LSize  Origin Snap% Move Log Copy% Devices
  lv_current vg_splat -wi-ao 20.00G                             /dev/sda3(0)
  lv_log     vg_splat -wi-ao 63.00G                             /dev/sdc1(0)   <-- Perfect

(The above command shows that lv_log now resides only on /dev/sdc1.)

Now I want to expand the file system on THE NEW DISK only.

TIP: When you expand a file system on a logical volume, you can utilize all the free space by using "-l +100%FREE" instead of my example below, where I use "-L +3910G". So let's extend the logical volume, passing /dev/sdc1 so only the new disk is used.

Extend the log volume to utilize the new space:

[Expert@firewall:0]# lvextend -L +3910G /dev/vg_splat/lv_log /dev/sdc1
  Extending logical volume lv_log to 3.88 TB
  Logical volume lv_log successfully resized

Then we resize the file system to fit the logical volume.

Resizing the file system:

[Expert@firewall:0]# resize2fs /dev/vg_splat/lv_log
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/vg_splat/lv_log is mounted on /var/log; on-line resizing required
Performing an on-line resize of /dev/vg_splat/lv_log to 1041498112 (4k) blocks.
The filesystem on /dev/vg_splat/lv_log is now 1041498112 blocks long.
Check that the data still resides on /dev/sdc1 for lv_log. List the LVM/PV location again:

[Expert@firewall:0]# lvs -o +devices
  LV         VG       Attr   LSize  Origin Snap% Move Log Copy% Devices
  lv_current vg_splat -wi-ao 20.00G                             /dev/sda3(0)
  lv_log     vg_splat -wi-ao  3.88T                             /dev/sdc1(0)

An extra check to see the file system size in human-readable format (-h). Verify that the log file system has been expanded:

[Expert@firewall:0]# df -h /var/log
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/vg_splat-lv_log  3.8T   40G  3.6T   2% /var/log

An extra check to ensure we can write to and read from the file system:

[Expert@firewall:0]# touch /var/log/deleteme && ls -al /var/log/deleteme && rm /var/log/deleteme
-rw-rw---- 1 admin users 0 Oct 22 13:42 /var/log/deleteme
[Expert@firewall:0]# ls -al /var/log/deleteme
ls: /var/log/deleteme: No such file or directory

That's it!

A "quickie" to run in expert mode:

parted -s /dev/sdc mklabel gpt
parted -s /dev/sdc unit mib mkpart primary 1 100%
parted -s /dev/sdc set 1 lvm on
partprobe
pvcreate /dev/sdc1
vgextend vg_splat /dev/sdc1
pvmove --background -n /dev/vg_splat/lv_log /dev/sda3 /dev/sdc1
lvextend -L +3910G /dev/vg_splat/lv_log /dev/sdc1
resize2fs /dev/vg_splat/lv_log

The end
Fairly frequently, I see spurious results in the log listing for a simple query in SmartView and SmartConsole's SmartLog function. Specifically, a query with a simple source and destination selected will include a few log entries that do not match the query. The same query produces this result in both tools. Here's an example: notice that the entry highlighted in the selector and detailed at right does not match the simple selection criteria.
I would like to run a query (something like NOT action:drop) on a list of unique IP addresses. I've looked through the documentation and tried IPs with a space between them, and with "AND" (no quote marks) between them. Neither worked. Any advice is appreciated.
Hello Everyone,

We are currently in the advanced stages of developing a Log Exporter update that will add CIM support. This will give us better Splunk integration for CIM-oriented apps and dashboards (e.g. Splunk Enterprise Security). We are currently looking for customers who wish to test this new feature (in either their lab or production) and share their feedback with us. I would also really appreciate it if in your email you could add the following details:

What version of Check Point do you use? And what version of Splunk server?
Is your Splunk environment installed as a single instance or is it a distributed environment?
Have you already tested previous releases of the Log Exporter, or is this your first use of the add-on?

The new update will also enable the Log Exporter to work in a semi-unified mode. For those who are unfamiliar with this setting, it means that updates are unified with their original log before they are exported. This makes the information in the update log complete and makes the update log itself more readable (in raw mode you had to manually search for the original log to make sense of the update).

Best Regards, Yonatan
Check Point VSX logs don't show the virtual system name as the origin. If I search by destination and/or source, I see the gateway name in the Origin field, but if I want to use a filter on Origin, I can't find the virtual system. It's Gaia R80.30. What could be the problem?
Pre-R80.10, NetFlow worked fine. Now on R80.30 I have two flows that are identical, but one only shows outbound and the other only shows inbound. And, perplexingly, it is the exact same traffic in both the inbound and outbound flows, i.e. source and destination are the same. Yes, let that simmer for a while.

I have one rule configured on the firewall, and it's a rule that a lot of web traffic hits. I'm using ManageEngine's NetFlow Analyzer. For this traffic, I would expect there to be one flow, and it should include both inbound and outbound traffic on the one interface (the internal interface it's hitting).
Hello All,

I'm reviewing sk112814, which explains how to overcome the following error: "SmartView server certificate is invalid" when opening a new tab in the R80 SmartConsole Logs & Monitor view. In the solution steps it says one should exclude the SmartEvent object from the SSL inspection group, but I haven't found straightforward instructions online on how to perform this step. Any assistance with screenshots will be much appreciated.

Regards, Adiel
Kobi Eisenkraft
Greetings,

This is my first post here. I really enjoy the community, whose posts helped me fix some issues I was facing. We have a SmartEvent server (SMS A) which stores logs from customers' installed gateways. We plan to move the system configuration and logs from SMS A to the newly installed SMS B, but my worry is about exporting the logs. How can I easily accomplish this?
Hi Everyone,

I have a Smart-1 5150 appliance that manages 90 Check Point gateways, and I want to integrate it with LogRhythm SIEM. I created a host object for the LogRhythm SIEM with its IP, created an OPSEC Application for it, and pulled certificates from the Check Point Smart-1 device. Now I need to provide the information below to the LogRhythm SIEM:

opsec_sic_name "OPSEC_APP_SIC_DN"
lea_server ip IP_ADDRESS
lea_server auth_port 18184
lea_server auth_type sslca
lea_server opsec_entity_sic_name "LOG_SERVER_DN"
opsec_sslca_file "C:\checkpoint_config\opsec.p12"

"OPSEC_APP_SIC_DN" is the DN name in the OPSEC Application, which is "CN=LogRhythm-XM,O=CP-Smart1..ksmkv" in my picture. Is this correct?

"lea_server auth_type" is sslca. Is sslca the only type, or are there other types?

For "LOG_SERVER_DN", I am not sure where to collect this information. I went to the web portal of the Smart-1 device and see the DN in the Certificate Authority tab, as below. Is this the right DN for "LOG_SERVER_DN"? Since the Smart-1 device manages all the other firewalls, the "LOG_SERVER_DN" is the DN of the Smart-1 device, right?

After configuring this, I still can't receive any Check Point OPSEC logs on the LogRhythm SIEM. Please help me solve this issue. Thanks!
We are running R80 MDS and would like to monitor our VSX clusters that are running R77.20 via SolarWinds using SNMP. Has anyone had any success getting the virtual systems monitored? Even after changing the SNMP mode from "default" to "vs", we are unable to poll the virtual systems. Could the API be used to pull the SNMP data? Thanks
Hi Team,

I have one EPM server R80.20 and licenses for unlimited Policy Servers. I have attached the central license to the EPM server. My query is: how do I attach licenses to the Policy Servers, since the 3 Policy Servers I have installed show eval licenses only?

TIA
Blason R
Trying to get an R77.30 CMA & CLM working with LEA. I am able to pull the certificate from the CMA without issue, but I get the following errors when launching LEA:

store_open: Failed stat: Value too large for defined data type
Failed to open LEA state file

Running LEA in DEBUG mode wasn't too helpful either.
Hi,

Has anyone encountered this issue with the MUH Identity Awareness agent running on Citrix servers? The initial connection works just fine, but after a few days it just disconnects and stops forwarding identities. The event log on the server says that it is connected, but the agent doesn't report that. A screenshot is attached. There doesn't seem to be an SK relating to this, so I'm wondering if it is a bug? It's an R80.10 environment running JHF 112 and SC Take 056.

TIA,
Stu