General Topics

Have a question and you can't figure out where to post about it after reading All Products and Where to Post About Them? Post it here!

Tsvika_Akerman
inside General Topics 10 hours ago
views 9934 70 15
Employee

R80.40 Early Availability Program @ Check Point Update

R80.40 EA Program

R80.40 features centralized management control across all networks, on premise or in the cloud, lowering the complexity of managing your security and increasing operational efficiency. As part of the Check Point Infinity architecture, R80.40 provides customers with the best security management, utilizing the industry's largest integration of technologies from more than 160 technology partners. With Check Point R80.40 Cyber Security for Gateways and Management, businesses everywhere can easily step up to Gen V.

Enrollment // Production EA
• We are looking for R80.x / R77.x production environments to evaluate the new version.
• Start date: Started

Public EA (for Lab/Sandbox use) is now also available! Log into UserCenter and select Try Our Products > Early Availability Programs. In PartnerMap, it is Learn > Evaluate > Early Availability Programs.
NOTE: Upgrade from Public EA to GA is not supported.

Additional questions? Contact us at EA_SUPPORT@checkpoint.com

What's New

IoT Security
A new IoT security controller to:
• Collect IoT devices and traffic attributes from certified IoT discovery engines (currently supports Medigate, CyberMDX, Cynerio, Claroty, Indegy, SAM and Armis).
• Configure a new IoT dedicated Policy Layer in policy management.
• Configure and manage security rules that are based on the IoT devices' attributes.

TLS Inspection

HTTP/2
HTTP/2 is an update to the HTTP protocol. The update provides improvements to speed, efficiency and security, and results in a better user experience. Check Point's Security Gateway now supports HTTP/2 and benefits from better speed and efficiency while getting full security, with all Threat Prevention and Access Control blades, as well as new protections for the HTTP/2 protocol. Support covers both clear and SSL-encrypted traffic and is fully integrated with HTTPS/TLS Inspection capabilities.

TLS Inspection Layer
This was formerly called HTTPS Inspection.
Provides these new capabilities:
• A new Policy Layer in SmartConsole dedicated to TLS Inspection.
• Different TLS Inspection layers can be used in different policy packages.
• Sharing of a TLS Inspection layer across multiple policy packages.
• API for TLS operations.

Threat Prevention
• Overall efficiency enhancement for Threat Prevention processes and updates.
• Automatic updates to the Threat Extraction engine.
• Dynamic, Domain and Updatable Objects can now be used in Threat Prevention and TLS Inspection policies. Updatable objects are network objects that represent an external service or a known dynamic list of IP addresses, for example Office365 / Google / Azure / AWS IP addresses and Geo objects.
• Anti-Virus now uses SHA-1 and SHA-256 threat indications to block files based on their hashes. Import the new indicators from the SmartConsole Threat Indicators view or the Custom Intelligence Feed CLI.
• Anti-Virus and SandBlast Threat Emulation now support inspection of e-mail traffic over the POP3 protocol, as well as improved inspection of e-mail traffic over the IMAP protocol.
• Anti-Virus and SandBlast Threat Emulation now use the newly introduced SSH inspection feature to inspect files transferred over the SCP and SFTP protocols.
• Anti-Virus and SandBlast Threat Emulation now provide improved support for SMBv3 inspection (3.0, 3.0.2, 3.1.1), including inspection of multi-channel connections. Check Point is now the only vendor to support inspection of a file transferred through multiple channels (a feature that is on by default in all Windows environments). This allows customers to stay secure while working with this performance-enhancing feature.

Access Control

Identity Awareness
• Support for Captive Portal integration with SAML 2.0 and third-party Identity Providers.
• Support for Identity Broker for scalable and granular sharing of identity information between PDPs, as well as cross-domain sharing.
• Enhancements to Terminal Servers Agent for better scaling and compatibility.
IPsec VPN
• Configure different VPN encryption domains on a Security Gateway that is a member of multiple VPN communities. This provides:
  - Improved privacy - internal networks are not disclosed in IKE protocol negotiations.
  - Improved security and granularity - specify which networks are accessible in a specified VPN community.
  - Improved interoperability - simplified route-based VPN definitions (recommended when you work with an empty VPN encryption domain).
• Create and seamlessly work with a Large Scale VPN (LSV) environment with the help of LSV profiles.

URL Filtering
• Improved scalability and resilience.
• Extended troubleshooting capabilities.

NAT
• Enhanced NAT port allocation mechanism - on Security Gateways with 6 or more CoreXL Firewall instances, all instances use the same pool of NAT ports, which optimizes port utilization and reuse.
• NAT port utilization monitoring in CPView and with SNMP.

Voice over IP (VoIP)
• Multiple CoreXL Firewall instances handle the SIP protocol to enhance performance.

Remote Access VPN
• Use machine certificates to distinguish between corporate and non-corporate assets and to set a policy enforcing the use of corporate assets only. Enforcement can be pre-logon (device authentication only) or post-logon (device and user authentication).

Mobile Access Portal Agent
• Enhanced Endpoint Security on Demand within the Mobile Access Portal Agent to support all major web browsers. For more information, see sk113410.

Security Gateway and Gaia

CoreXL and Multi-Queue
• Support for automatic allocation of CoreXL SNDs and Firewall instances that does not require a Security Gateway reboot.
• Improved out-of-the-box experience - the Security Gateway automatically changes the number of CoreXL SNDs and Firewall instances and the Multi-Queue configuration based on the current traffic load.

Clustering
• Support for Cluster Control Protocol in Unicast mode, which eliminates the need for CCP Broadcast or Multicast modes.
• Cluster Control Protocol encryption is now enabled by default.
• New ClusterXL mode - Active/Active, which supports Cluster Members in different geographic locations that are located on different subnets and have different IP addresses.
• Support for ClusterXL Cluster Members that run different software versions.
• Eliminated the need for MAC Magic configuration when several clusters are connected to the same subnet.

VSX
• Support for VSX upgrade with CPUSE in Gaia Portal.
• Support for Active Up mode in VSLS.
• Support for CPView statistical reports for each Virtual System.

Zero Touch
• A simple Plug & Play setup process for installing an appliance - eliminating the need for technical expertise and having to connect to the appliance for initial configuration.

Gaia REST API
• Gaia REST API provides a new way to read and send information to servers that run the Gaia Operating System. See sk143612.

Advanced Routing
• Enhancements to OSPF and BGP allow resetting and restarting OSPF neighboring for each CoreXL Firewall instance without the need to restart the routed daemon.
• Enhanced route refresh for improved handling of BGP routing inconsistencies.

New kernel capabilities
• Upgraded Linux kernel.
• New partitioning system (GPT): supports more than 2 TB physical/logical drives.
• Faster file system (xfs), supporting larger system storage (up to 48 TB tested).
• I/O related performance improvements.
• Multi-Queue: full Gaia Clish support for Multi-Queue commands, and automatic "on by default" configuration.
• SMB v2/3 mount support in the Mobile Access blade.
• Added NFSv4 client support (NFS v4.2 is the default NFS version used).
• Support for new system tools for debugging, monitoring and configuring the system.

CloudGuard Controller
• Performance enhancements for connections to external Data Centers.
• Integration with VMware NSX-T.
• Support for additional API commands to create and edit Data Center Server objects.
Security Management

Multi-Domain Server
• Back up and restore an individual Domain Management Server on a Multi-Domain Server.
• Migrate a Domain Management Server from one Multi-Domain Server to a different Multi-Domain Server.
• Migrate a Security Management Server to become a Domain Management Server on a Multi-Domain Server.
• Migrate a Domain Management Server to become a Security Management Server.
• Revert a Domain on a Multi-Domain Server, or a Security Management Server, to a previous revision for further editing.

SmartTasks and API
• New Management API authentication method that uses an auto-generated API key.
• New Management API commands to create cluster objects.
• SmartTasks - configure automatic scripts or HTTPS requests triggered by administrator tasks, such as publishing a session or installing a policy.

Deployment
• Central Deployment of Jumbo Hotfix Accumulators and Hotfixes from SmartConsole or with an API allows installing or upgrading multiple Security Gateways and Clusters in parallel.

SmartEvent
• Share SmartView views and reports with other administrators.

Log Exporter
• Export logs filtered according to field values.

Endpoint Security
• Support for BitLocker encryption for Full Disk Encryption.
• Support for external Certificate Authority certificates for Endpoint Security client authentication and communication with the Endpoint Security Management Server.
• Support for dynamic size of Endpoint Security Client packages based on the selected features for deployment.
• Policy can now control the level of notifications to end users.
• Support for Persistent VDI environments in Endpoint Policy Management.
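The new API-key authentication mentioned above can be sketched with curl against the Management Web API. This is only an illustration: the server name, key value, and session id are placeholders, and endpoint names should be verified against the API reference for your exact release.

```shell
# Log in to the Management API with an auto-generated API key
# instead of administrator name/password ("<mgmt-server>" and the
# key are placeholders):
curl -k -s https://<mgmt-server>/web_api/login \
    -H "Content-Type: application/json" \
    -d '{"api-key": "<your-generated-api-key>"}'

# The JSON response contains a "sid"; pass it on subsequent calls:
curl -k -s https://<mgmt-server>/web_api/show-hosts \
    -H "Content-Type: application/json" \
    -H "X-chkp-sid: <sid-from-login>" \
    -d '{}'
```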
Kaspars_Zibarts
Kaspars_Zibarts inside General Topics 10 hours ago
views 101 4 1

First impressions R80.30 on gateway - one step forward, one (or two) back

Ok, we were finally "forced" to go ahead and upgrade our gateways from R80.10 to R80.30 for fairly small things - we wanted to be able to use the O365 Updatable Object (instead of home-grown scripts) and improve Domain (FQDN) object performance issues, where all FWK cores were making DNS queries and causing a lot of alerts (see https://community.checkpoint.com/t5/General-Topics/Domain-objects-in-R80-10-spamming-DNS/m-p/19786).

Positive things - upgrades were smooth and painless, both on regular gateways and VSX. All regular gateways seem to be performing as before, but I have to be honest that they are "over-dimensioned", with rather powerful HW for the job - 5900 with 16 cores. VSX though threw a couple of surprises.

SXL medium path usage. CPU jumped from <30% to above 50% on the busiest VS, which only has FW and IA blades enabled. Ok, there is also VPN, but only one connection. I haven't spent enough time digging into it, but for some reason 1/3 of all connections took the medium path, whereas before in R80.10 it was nearly all fully accelerated. Most of it was HTTPS (95%), with the next most used being LDAP-SSL (2%). I used the SXL fast accelerator feature (thanks @HeikoAnkenbrand, https://community.checkpoint.com/t5/General-Topics/R80-x-Performance-Tuning-Tip-SecureXL-Fast-Accelerator-fw-ctl/td-p/67604) to exclude our proxies and some other nets, and you can see that on Friday CPU load was reduced by 10%, but nowhere near what it used to be. I just find it impossible to explain why a gateway with only the FW blade enabled would start to throw all (by the looks of it) traffic via PXL. And the statistics are a bit funny too.

FQDN alerts in logs. I can definitely confirm that only one core now is doing DNS lookups (against all DNS servers you have defined, in our case 2). But we are still getting a lot of alerts like these: "Firewall - Domain resolving error. Check DNS configuration on the gateway (0)" - especially after I enabled the updatable object for O365 in the rulebase.
As said before - I have not spent too much time on this, as we had other "fun" stuff to deal with on our chassis, so it's fairly "raw". I will report more once I have some answers.
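For readers who want to try the fast accelerator feature referenced above, a sketch of the gateway CLI usage (per sk156672; the exact `add` argument order can differ between takes, so verify against the SK for your version - the networks below are examples only):

```shell
# Enable the SecureXL Fast Accelerator feature (sk156672), then exempt
# trusted networks (e.g. the proxies mentioned above) from deep
# inspection so their traffic stays in the accelerated path.
fw ctl fast_accel enable

# add <src> <dst> <dport> <ip-proto>  (6 = TCP; example network)
fw ctl fast_accel add 10.1.1.0/24 0.0.0.0/0 443 6

# list the currently configured fast-accel rules
fw ctl fast_accel show_table
```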
TimLofgren
TimLofgren inside General Topics yesterday
views 355 8

Gateway issue post upgrade to R80.30

Hello all, I recently upgraded my SMS server from R80.20 to R80.30 and ran all pending hotfixes after the update. I took new backups of the config and snapshots of the SMS server after all was done. I then took snapshots and backups of the one and only gateway that we utilize, and ran the upgrade from CPUSE. After it rebooted, the management interface never came back up. The device was located off site at a data center, so I had to go collect it and it is now on my desk. I can connect to the device's CLISH from console, but I have no idea how to troubleshoot what went wrong or how to proceed with the upgrade to R80.30 after this failure. Luckily this is not in production yet. We are still on our old firewall systems until I get this online and working on R80.30. Can anyone point me in the right direction? Tim
Sunandan_Banerj
inside General Topics yesterday
views 2285 5
Employee

SSH decryption in Check Point R80.20

Hi, do we support SSH decryption? If yes, please share a URL/link for reference. If not, do we have any workaround? Regards, Sunandan
HeikoAnkenbrand
HeikoAnkenbrand inside General Topics yesterday
views 169 5 6

ONELINER - process utilization per core

With this one-liner you can view the process load on each core. Change the variable CORE to the correct core number (for example CORE=3):   CORE=3; ps --sort=-c -e -o pid,psr,%cpu,%mem,cmd | grep -E "^[[:space:]][[:digit:]]+[[:space:]]+${CORE}"   ps L -> lists all format specifiers (for example pid, psr, %cpu, %mem, ...). There is still much potential for improvement here 🙂
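A variant of the same idea (a sketch; assumes a Linux `ps` from procps, as on Gaia) that matches the core column with awk instead of a regex, and loops over every core instead of hard-coding one:

```shell
#!/bin/sh
# For each online core, list the processes currently scheduled on it
# (column psr), sorted by CPU usage descending.
NCORES=$(getconf _NPROCESSORS_ONLN)
core=0
while [ "$core" -lt "$NCORES" ]; do
    echo "=== core $core ==="
    ps -e -o pid=,psr=,pcpu=,pmem=,comm= \
        | awk -v c="$core" '$2 == c' \
        | sort -k3 -rn
    core=$((core + 1))
done
```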
Security_Suppo1
Security_Suppo1 inside General Topics yesterday
views 94 1

Unable to download R80.40

Hi, has something happened to the download link for R80.40? I have registered for the EA, however when continuing through to the download link, it suggests that I haven't registered, or it cannot find the link I am looking for. If anyone can help, that would be great.
Biggsy
Biggsy inside General Topics Thursday
views 120 5 1

Move External Interface to different NIC

Gaia 3.10 R80.20 fail-over cluster using ClusterXL on Open Servers (Dell R740). I need to move the External interface from a 1G copper NIC (eth2) to a 10G SFP+ slot (eth0) utilizing a DAC cable. Here is my plan so far (do during a maintenance window, as there will be an outage):

1. Via Gaia Portal, Network Interfaces tab: start on the Standby member - delete the IP address from eth2 and disable the NIC. Configure eth0 with the settings and IP from eth2, disable auto-negotiation, click on link speed and select 10Gbps / Full, and enable the NIC. Then do the Active member (which will start my outage).
2. Disconnect the copper cables from eth2 and connect the DAC cables.
3. Via SmartConsole, edit the FW Cluster object, select Network Management; copy all info from eth2, then delete the eth2 interface. Then should I:
   a) do a "get interfaces with topology", OR
   b) add a new interface and input the info from eth2 by hand?

I'm thinking I should do the "get interfaces with topology", just worried it might change some settings or something with the other interfaces... Then save and push policy. Or do I need to reboot the gateways first? Since there isn't a discntd.if file anymore, I don't think I need to reboot them... PLEASE let me know if I'm missing something or if there is a better way to do this.
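The interface changes in step 1 can also be done from Gaia clish. A sketch only - the IP address and mask below are placeholders, and the exact link-speed token for your NIC should be confirmed with "show interface eth0" before relying on it:

```shell
# Gaia clish sketch of the interface move (run on the Standby member first).
delete interface eth2 ipv4-address      # remove the IP from the old NIC
set interface eth2 state off            # disable the old NIC

set interface eth0 auto-negotiation off
set interface eth0 link-speed 10000M/full
set interface eth0 ipv4-address 203.0.113.2 mask-length 28   # placeholder IP
set interface eth0 state on
save config
```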
Danish_Javed1
Danish_Javed1 inside General Topics Thursday
views 1446 6

Vulnerability Mitigation for TLS 1.0 and Weak Ciphers

Hello, I need instructions to mitigate the following two vulnerabilities on our Gateways:
1) Enable support for TLS 1.1 and TLS 1.2, and disable TLS 1.0
2) Removal of weak ciphers
We are using a VSX cluster environment with R80.10. Also, what could be the after-effects on the existing production environment after remediating these vulnerabilities? Please suggest. Thanks
Carlos_Rodrigu2
Carlos_Rodrigu2 inside General Topics Thursday
views 1458 4

Unable to find any devices of the type needed for this installation type.

Hi guys, after using the ISOmorphic tool to put an R77.30 image on a USB drive, I turn on my Check Point T2200 with the USB and the process begins. After a few seconds I have the following error: I tried to select different drivers, but no success with that. Can someone help me? Right now my Check Point is stopped... Thanks in advance
humt
humt inside General Topics Thursday
views 434 13

ISP Compromised - Everything become failure

My ISP has been compromised, and I have no idea what to do. The ISP was already compromised a few months back, but I thought the issue was my router, since it is from a local company. I was wrong. I have used another router and even a firewall - all wasted effort, as the firewall fails to stop it. I sent a report to Kaspersky, and Kaspersky says the problem is on the router side. When I search on Google, some developers say it is on the ISP side. I have formatted my system 3 times, reset the router, and reset the firewall. All to no avail.
HeikoAnkenbrand
HeikoAnkenbrand inside General Topics Thursday
views 612 11 16

R80.x Performance Tuning Tip - Elephant Flows (Heavy Connections)

Elephant Flows (Heavy Connections)
In computer networking, an elephant flow (heavy connection) is an extremely large (in total bytes) continuous flow set up by TCP or another protocol, measured over a network link. Elephant flows, though not numerous, can occupy a disproportionate share of the total bandwidth over a period of time. Observations have shown that a small number of flows carry the majority of Internet traffic, while the remainder consists of a large number of flows that carry very little Internet traffic (mice flows). All packets associated with an elephant flow must be handled by the same Firewall worker core (CoreXL instance). Packets can be dropped by the Firewall when the CPU cores on which the Firewall runs are fully utilized. Such packet loss might occur regardless of the connection's type.

What typically produces heavy connections:
- System backups
- Database backups
- VMware sync

Chapter
More interesting articles:
- R80.x Architecture and Performance Tuning - Link Collection
- Article list (Heiko Ankenbrand)

Evaluation of heavy connections
The big question is: how do you find elephant flows on an R80 gateway?

Tip 1 - Evaluation of heavy connections (elephant flows)
A first indication is a high CPU load on one core while all other cores have a normal CPU load. This can be displayed very nicely with "top". Ok, now a core has 100% CPU usage. What can we do now? For this there is sk105762 to activate "Firewall Priority Queues". This feature allows the administrator to monitor the heavy connections that consume the most CPU resources without interrupting the normal operation of the Firewall. After enabling this feature, the relevant information is available in the CPView utility. The system saves heavy connection data for the last 24 hours, and CPDiag has a matching collector which uploads this data for diagnosis purposes.
A heavy connection on a Check Point gateway is defined as:
- The specific instance CPU is over 60%
- The suspected connection lasts more than 10s
- The suspected connection utilizes more than 50% of the total work the instance does (in other words, connection CPU utilization must be > 30%)

CLI Commands

Tip 2 - Enable the monitoring of heavy connections
To enable the monitoring of heavy connections that consume high CPU resources:
# fw ctl multik prioq 1
# reboot

Tip 3 - Find heavy connections on the gateway with "print_heavy_conn"
On the system itself, heavy connection data is accessible using the command:
# fw ctl multik print_heavy_conn

Tip 4 - Find heavy connections on the gateway with cpview
# cpview
CPU > Top-Connections > InstancesX

Links
sk105762 - Firewall Priority Queues in R77.30 / R80.10 and above
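The tips above can be tied together into one workflow on the gateway. These are Check Point-specific commands that only run on a Gaia gateway, and output formats vary by version, so treat this as a sketch:

```shell
# 1. Spot a hot CoreXL instance: one core pegged at ~100% while the
#    others are at normal load.
top

# 2. Enable Firewall Priority Queues (sk105762) - requires a reboot.
fw ctl multik prioq 1
reboot

# 3. After the reboot, dump the heavy-connection data collected so far.
fw ctl multik print_heavy_conn

# 4. Or watch it live in CPView under CPU > Top-Connections.
cpview
```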
Jay
Jay inside General Topics Thursday
views 1183 5 1

checkpoint OS download

Hello, when downloading the image for R80.20, we have the following options. I would like to understand the use of item 1 below - what is the difference compared with items 5 and 6?
1. Check Point R80.20 with new Gaia 3.10 T5 for Security Gateway
2. Dual Image R80.20 (Take 101) / R80.10 Gaia Clean Install for 6000 appliance
3. ISOmorphic - tool for creating a bootable SecurePlatform/Gaia flash device - Build 166
4. ISOmorphic - tool for creating a bootable SecurePlatform/Gaia flash device - Build 168
5. R80.20 Fresh Install and Upgrade for Security Gateway and Standalone
6. R80.20 Gaia Fresh Install for Security Gateway and Standalone
Thanks, Jay
HeikoAnkenbrand
HeikoAnkenbrand inside General Topics Wednesday
views 337616 46 162

R80.30 cheat sheet - ClusterXL

Introduction
This overview gives you a view of the changes in R80.30 ClusterXL. All R80.10 and R80.20 changes are contained in this command overview (cheat sheet). You can download the cheat sheet at the end of this article as a PDF file.

Cheat Sheet Download
Download: R80.30 ClusterXL cheat sheet PDF (new R80.30 version)

Chapter
More interesting articles:
- R80.x Architecture and Performance Tuning - Link Collection
- Article list (Heiko Ankenbrand)
Cheat Sheets:
- R80.x cheat sheet - fw monitor
- R80.x cheat sheet - ClusterXL

References
sk56202 - How to troubleshoot failovers in ClusterXL
sk62570 - How to troubleshoot failovers in ClusterXL - Advanced Guide
sk92723 - Cluster flapping prevention
sk43984 - Interface flapping when cluster interfaces are connected through several switches
sk83220 - How to collect ClusterXL debug during boot
sk31499 - How to find out the Multicast MAC Addresses that are associated with Cluster Virtual interfaces
sk92909 - How to debug ClusterXL to understand why a connection is not synchronized
sk55081 - Best practice for manual fail-over in ClusterXL
sk32578 - SecureXL Mechanism
sk33781 - Performance analysis for Security Gateway NGX R65 / R7x
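As a quick companion to the cheat sheet, the most common ClusterXL health checks on a cluster member look like this (standard gateway commands; output details differ between versions):

```shell
# Cluster state of this member and its peers (Active/Standby/Down)
cphaprob stat

# Monitored cluster interfaces and CCP mode
cphaprob -a if

# Registered critical devices ("pnotes") and their states
cphaprob list

# State-synchronization statistics (sync delivery problems etc.)
cphaprob syncstat
```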
Security_Suppo1
Security_Suppo1 inside General Topics Tuesday
views 159 6

Passing GRE traffic

Hello. Can someone advise exactly where Check Point stands with GRE support? I understand they can't build or terminate GRE tunnels, but can they pass the traffic through? There is a VPN between 2 Cisco routers which are trying to establish a tunnel, however it isn't coming up. After discussions, I realised they are using GRE over IPsec VPN. I have now concluded that this is the reason why it's not coming up. Any suggestions?
HeikoAnkenbrand
HeikoAnkenbrand inside General Topics Tuesday
views 367888 120 317

R80.x Security Gateway Architecture (Logical Packet Flow)

Introduction
This document describes the packet flow (partly also connection flows) in Check Point R80.10 and above with SecureXL and CoreXL. Content Inspection, stateful inspection, network and port address translation (NAT), MultiCore Virtual Private Network (VPN) functions and forwarding are applied per-packet on the inbound and outbound interfaces of the device. It is intended as an overview of the basic technologies of a Check Point firewall. We have also reworked the document several times with Check Point, so that it is now finally available.

Chapter
More interesting articles:
- R80.x Architecture and Performance Tuning - Link Collection
- Article list (Heiko Ankenbrand)

Flowchart basic (now R80.30+)
The new R80.30/R80.40 version adds new path names, QoS, and QoS fw monitor inspection points. The paths for VPN and QoS are only drawn logically in this flowchart. In reality, QoS can be executed in the following paths: CoreXL "FireWall QoS Path" and SecureXL "Accelerated QoS Path". The same is valid for VPN; R80.30 has the following paths here: CoreXL with "F2V" and SecureXL with "Accelerated VPN Path".

Download
Download: R80.30 Flowchart v1.6 PDF (new R80.30 version)
Download: R80.30 Flowchart v1.5 PDF (old R80.30 version)
Download: R80.10 Flowchart v1.4 PDF (old R80.10 version)

What's new in R80.10 and above
R80.10 and above offer many technical innovations compared to R77. I will look at the following in this article:
- new fw monitor inspection points for VPN (e and E)
- new MultiCore VPN
- UP Manager
- Content Awareness (CTNT)

R80.20 and above:
- SecureXL has been significantly revised in R80.20. This has also led to some changes in "fw monitor".
- There are new fw monitor chain (SecureXL) objects that do not run in the virtual machine.
- Now SecureXL works in user space.
The SecureXL driver takes a certain amount of kernel memory per core, and that was adding up to more kernel memory than Intel/Linux was allowing.
- SecureXL now supports Async SecureXL with Falcon cards.
- New in the acceleration high-level architecture (SecureXL on an Acceleration Card): streaming over SecureXL, Lite Parsers, Scalable SecureXL, acceleration stickiness.
- Policy push acceleration on Falcon cards.
- Falcon cards for: low latency, high connection rate, SSL boost, deep inspection acceleration, modular connectivity, multiple acceleration modules.
- Falcon cards compatible with 5900, 15000 & 23000 Appliance Series > 1G (8x1 GbE), 10G (4x10 GbE) and 40G (2x40 GbE).

R80.30 and above:
- In R80.30+, you can also allocate a core for management traffic if you have 8 or more cores licensed, but this is not the default.
- Active streaming for HTTPS with full SNI support.

R80.40 and above:
- Support for automatic allocation of CoreXL SNDs and Firewall instances that does not require a Security Gateway reboot.
- CoreXL and Multi-Queue: improved out-of-the-box experience - the Security Gateway automatically changes the number of CoreXL SNDs and Firewall instances and the Multi-Queue configuration based on the current traffic load.
- Check Point's Security Gateway now supports HTTP/2.
- A new Policy Layer in SmartConsole dedicated to TLS Inspection; different TLS Inspection layers can be used in different policy packages.
- Enhanced NAT port allocation mechanism - on Security Gateways with 6 or more CoreXL Firewall instances, all instances use the same pool of NAT ports, which optimizes port utilization and reuse.
- Multiple CoreXL Firewall instances handle the SIP protocol to enhance performance.
- Cluster Control Protocol encryption is now enabled by default.

Flowchart (new in R80.20+)
SecureXL has been significantly revised in R80.20. This has also led to some changes in "fw monitor". There are new fw monitor chain (SecureXL) objects that do not run in the virtual machine.
Now SecureXL works in part in user space. The SecureXL driver takes a certain amount of kernel memory per core, and that was adding up to more kernel memory than Intel/Linux was allowing. The packet flow in R80.20+ is a little bit different from the flow in versions lower than R80.20. Now it is possible to use async SecureXL and other new functions. This figure shows the new features with the reinjection of SecureXL packets. SecureXL now also supports Async SecureXL with Falcon cards. New in the acceleration high-level architecture (SecureXL on an Acceleration Card): streaming over SecureXL, Lite Parsers, Scalable SecureXL, acceleration stickiness.

What's new in R80.20/R80.30+: several SecureXL instances are now possible. As a result, there are now eight paths in R80.20/R80.30 and nine in R80.40, instead of six in R80.10. (I will make a drawing with the new paths in the near future.)

Path                                  | R80.10 | R80.20 | R80.30 | R80.40
Firewall Path (F2F - slow path)       |   X    |   X    |   X    |   X
F2V (Forward to Virtual Machine)      |        |   X    |   X    |   X
Accelerated Path (Fast Path)          |   X    |   X    |   X    |   X
Accelerated VPN Path                  |   X    |   X    |   X    |   X
Medium Path (PXL/PSL)*                |   X    |        |        |
Medium Streaming Path (PXL/CPASXL)*   |        |   X    |   X    |   X
Inline Streaming Path (PSL/CPAS)*     |        |   X    |   X    |   X
Buffer Path                           |        |        |        |   X
  - TLS Decrypt                       |        |        |        |   X
  - TLS Parser                        |        |        |        |   X
  - HTTP Disp                         |        |        |        |   X
  - ADVP - Advanced Patterns          |        |        |        |   X
  - WS LITE                           |        |        |        |   X
FireWall QoS Path                     |   X    |   X    |   X    |   X
Accelerated QoS Path                  |   X    |   X    |   X    |   X

*) Starting with version R80.20, the medium path is split into two paths: "Medium Streaming Path" and "Inline Streaming Path".

F2V - the "Forward to Virtual Machine" path, from version R80.20 and above. These packets always belong to an existing connection that is optimized via the SecureXL path. If a packet needs a new rulebase lookup in the SXL path, it is sent to the F2V path. When the rulebase lookup is done, the packet is reinjected into the SXL path (accelerated path).
As a result, packets are reinjected with the new SecureXL ID into the correct SecureXL instance after they have been allowed by an accept template or the rule set. After the packet has been reinjected, the SecureXL ID is added to the SecureXL connection table and the packet is forwarded to the correct SecureXL instance. Therefore the flow is slightly different from older versions before R80.20. This new mechanism also offers the possibility to transfer packets into a new SecureXL instance on Falcon cards.

PXL vs. PSLXL - technology name for the combination of SecureXL and PSL. PXL was renamed to PSLXL in R80.20, which is, from my point of view, the better term.

R80.20 / R80.30 and above: R80.20 SecureXL adds support for offloading from the appliance to the Falcon acceleration card, leaving the appliance free to do more. The following flowchart shows the new R80.20/R80.30 offloaded architecture in pink.
- Host Path - for non-accelerated connections (e.g. local connections) and connections on non-acceleration-card interfaces.
- Buffer Path - for HTTP requests, HTTP response headers and TLS handshakes.
- Inline Path - for HTTP response body (until 1st-tier match) and TLS bulk encryption/decryption.

For the new acceleration Falcon card architecture with R80.20+ and SecureXL offloading, read this article: R80.x Security Gateway Architecture (Acceleration Card Offloading).

VPN
Decrypting a packet: R80.10 and R80.20 introduced MultiCore support (new in R80 and above) for IPsec VPN. An IPsec packet enters the Security Gateway. The decrypted original packet is forwarded from the SND to the connection's CoreXL FW instance for Firewall inspection at the Pre-Inbound chain "i". The decrypted, inspected packet is sent to the OS kernel.

Encrypting a packet: Encryption information is prepared at the Post-Outbound chain "O". The vpnk module on the tunnel's CoreXL FW instance gets the packet before encryption at chain "e".
The packet to be encrypted is forwarded from the SND to the connection's CoreXL FW instance. The packet is encrypted by the vpnk module at chain "E". Afterwards the IPsec packet is sent out on the interface. The fw monitor inspection points "e" and "E" are new in R80.10, and "oe" and "OE" are new in R80.20. Note: they only exist on the outbound side, for encrypting packets, not for decrypting packets on the inbound side.

R80.20 VPN + SecureXL and above (sk151114): Disabling acceleration by running fwaccel off will not have an immediate effect on IPsec acceleration, as it did before R80.20. Using fwaccel off will cause every existing VPN connection to continue to be processed by the acceleration module (SecureXL); only new connections will not be offloaded to the acceleration module. As long as there are accelerated VPN connections associated with the IPsec tunnel, all decryption/encryption operations will continue to be handled by the acceleration module.

VPN before R80.20: VPN connections could be migrated between the acceleration module and FireWall-1 instances due to synchronous communication between those modules.

VPN since R80.20: fwaccel off does not stop the SecureXL device, and the communication between SecureXL and FireWall-1 is now asynchronous. All connections that were accelerated will continue to be handled by PPAK. Furthermore, when new decryption/encryption keys are generated, the decision whether to accelerate the tunnel depends on whether there are accelerated connections associated with the tunnel. As a result, to disable VPN tunnel acceleration, all outstanding related connections should be terminated. This behavior prevents disabling acceleration of tunnels as long as accelerated connections are associated with those tunnels.

Firewall Core
Inbound Stateless Check: The firewall does preliminary "stateless" checks that do not require context in order to decide whether to accept a packet or not.
For instance, it checks that the packet is valid and that the header is compliant with RFC standards.

Anti-Spoofing: Anti-Spoofing verifies that the source IP of each packet matches the interface on which it was encountered. On internal interfaces, only packets whose source IP is within the user-defined network topology are allowed. On the external interface, all source IPs are allowed except for those that belong to internal networks.

Connection Setup: A core component of the Check Point R80.x Threat Prevention gateway is the stateful inspection firewall. A stateful firewall tracks the state of network connections in memory to identify other packets belonging to the same connection and to dynamically open connections that belong to the same session. Allowing FTP data connections using the information in the control connection is one such example. Using Check Point INSPECT code, the firewall is able to dynamically recognize that the FTP control connection is opening a separate data connection to transfer data. When the client requests that the server generate the back-connection (an FTP PORT command), INSPECT code extracts the port number from the request. Both client and server IP addresses and both port numbers are recorded in an FTP-data pending request list. When the FTP data connection is attempted, the firewall examines the list and verifies that the attempt is in response to a valid request. The list of connections is maintained dynamically, so that only the required FTP ports are opened.

SecureXL

SecureXL is a software acceleration product installed on Security Gateways. Performance Pack uses SecureXL technology and other innovative network acceleration techniques to deliver wire-speed performance for Security Gateways.
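Stepping back to the FTP example in the Connection Setup paragraph above: the pending-request list can be sketched as follows. This is a conceptual illustration in Python, not INSPECT code; all names and return values are invented.

```python
# Conceptual sketch (not INSPECT code): allow an FTP data connection only if
# it answers a PORT command previously seen on the control connection.

pending = set()  # entries: (client_ip, server_ip, data_port)

def on_port_command(client_ip, server_ip, data_port):
    # INSPECT extracts the advertised port from the PORT command.
    pending.add((client_ip, server_ip, data_port))

def on_data_connection(server_ip, client_ip, dst_port):
    # The server opens the back-connection to the client's advertised port.
    key = (client_ip, server_ip, dst_port)
    if key in pending:
        pending.discard(key)   # one-shot: open only the required port
        return "accept"
    return "drop"

on_port_command("10.0.0.5", "198.51.100.7", 50123)
print(on_data_connection("198.51.100.7", "10.0.0.5", 50123))  # accept
print(on_data_connection("198.51.100.7", "10.0.0.5", 50123))  # drop (already used)
```

The one-shot removal mirrors the idea that the list is maintained dynamically, so only the required FTP ports are opened.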
SecureXL is implemented either in software or in hardware:
SAM cards on Check Point 21000 appliances
ADP cards on IP Series appliances
Falcon cards (new in R80.20) on different appliances

The SecureXL device minimizes the connections that are processed by the INSPECT driver. SecureXL accelerates connections in two ways.

New in R80.10: In R80.10 SecureXL adds support for Domain Objects, Dynamic Objects and Time Objects. CoreXL accelerates VPN traffic by distributing Next Generation Threat Prevention inspection across multiple cores.

New in R80.20: SecureXL was significantly revised in R80.20. It no longer works in Linux kernel mode but in user space. In kernel mode, resources (for example, memory) are very limited; running in user space has the advantage that more resources can be used. The SecureXL driver takes a certain amount of kernel memory per core, and that was adding up to more kernel memory than Intel/Linux allows. On the 23900 in particular, not all processor cores could be leveraged due to this limitation. By moving all or most of SecureXL to user space, it is possible to leverage more processor cores, as the firewall can run entirely in user space. It still doesn't by default in R80.20 in non-VSX mode, but it can be enabled. It also means that certain kinds of low-level packet processing that could not easily be done in SecureXL, because it was running in the kernel, now can be. For VSX in particular, it means the penalty box features can now be configured on a per-VS basis. It also improves session establishment rates on the higher-end appliances.
In addition, the following functions have been integrated into R80.20 SecureXL:
SecureXL on Acceleration Cards (AC)
Streaming over SecureXL
Lite Parsers
Async SecureXL
Scalable SecureXL
Acceleration stickiness
Policy push acceleration

Throughput Acceleration - The first packets of a new TCP connection require more processing when processed by the firewall module. If the connection is eligible for acceleration, after minimal security processing the packet is offloaded to the SecureXL device associated with the proper egress interface. Subsequent packets of the connection can be processed on the accelerated path and sent directly from the inbound to the outbound interface via the SecureXL device.

Connection Rate Acceleration - SecureXL also improves the rate of new connections (connections per second) and the connection setup/teardown rate (sessions per second). To accelerate the rate of new connections, connections that do not exactly match an existing 5-tuple can still be processed by SecureXL; for example, the source port is masked and only the other 4 tuple attributes require a match. When a connection is processed on the accelerated path, SecureXL creates a template of that connection that does not include the source port tuple. A new connection that matches the other 4 tuples is processed on the accelerated path because it matches the template. The firewall module does not inspect the new connection, increasing firewall connection rates. SecureXL and the firewall module keep their own state tables and communicate updates to each other.

Connection notification - SecureXL passes the relevant information about accelerated connections that match an accept template.
Connection offload - The firewall kernel passes the relevant information about the connection from the firewall connections table to the SecureXL connections table.
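The template mechanism described above can be sketched as follows. This is a conceptual illustration, not SecureXL's actual data structures; all names are invented.

```python
# Conceptual sketch: connection-rate acceleration via accept templates.
# A template is the connection 5-tuple with the source port wildcarded, so a
# follow-up connection matching the other 4 attributes skips the rule base.

connections = {}   # full 5-tuples known to SecureXL
templates = set()  # 4-tuples (source port masked)

def five_tuple(src, sport, dst, dport, proto):
    return (src, sport, dst, dport, proto)

def offload(conn):
    connections[conn] = "accelerated"
    src, _sport, dst, dport, proto = conn
    templates.add((src, dst, dport, proto))  # source port wildcarded

def lookup(conn):
    if conn in connections:
        return "accelerated path"
    src, _sport, dst, dport, proto = conn
    if (src, dst, dport, proto) in templates:
        offload(conn)                        # established without a rule match
        return "accelerated path (template)"
    return "firewall path (rule match needed)"

first = five_tuple("10.0.0.1", 33000, "192.0.2.9", 443, "tcp")
offload(first)    # the first connection was rule-matched, then offloaded
second = five_tuple("10.0.0.1", 33001, "192.0.2.9", 443, "tcp")
print(lookup(second))   # accelerated path (template)
```

Only the source port differs between the two connections, so the second one is established from the template without touching the firewall module.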
In addition to accept templates, the SecureXL device is also able to apply drop templates, which are derived from security rules whose action is drop. In addition to firewall security policy enforcement, SecureXL also accelerates NAT and IPsec VPN traffic.

QXL - Technology name for the combination of SecureXL and QoS (R77.10 and above). This has no direct association with PXL; it is used exclusively for QoS. However, it is also possible to use the QoS path in combination with PSL.

SAM card and Falcon card (R80.20 and above) - Security Acceleration Module card. Connections that use a SAM/Falcon card are accelerated by SecureXL and are processed by the SAM/Falcon card's CPU instead of the main CPU (refer to the 21000 Appliance Security Acceleration Module Getting Started Guide).

SecureXL uses the following templates. If templating is used under SecureXL, the templates are created when the firewall ruleset is installed.

Accept Template - Feature that accelerates the speed at which a connection is established by matching a new connection to a set of attributes. When a new connection matches the Accept Template, subsequent connections are established without performing a rule match and are therefore accelerated. Accept Templates are generated from active connections according to policy rules. Currently, Accept Template acceleration is performed only on connections with the same destination port (using wildcards for source ports). Accept Templates are enabled by default if SecureXL is used.

Drop Template - Feature that accelerates the speed at which a connection is dropped by matching a new connection to a set of attributes. When a new connection matches the Drop Template, subsequent connections are dropped without performing a rule match and are therefore accelerated. Currently, Drop Template acceleration is performed only on connections with the same destination port (it does not use wildcards for source ports). Drop Templates are disabled by default if SecureXL is used.
They can be activated via SmartDashboard and do not require a reboot of the firewall.

NAT Templates - Using SecureXL Templates for NAT traffic is critical to achieving a high session rate for NAT. SecureXL Templates are supported for Static NAT and Hide NAT using the existing SecureXL Templates mechanism. Normally the first packet would use the F2F path. However, if SecureXL is used, the first packet will not be forwarded to the F2F path if an Accept Template and a NAT Template match. Enabling or disabling NAT Templates requires a firewall reboot.
R80.10 and lower: NAT Templates are disabled by default.
R80.20 and above: NAT Templates are enabled by design.

SecureXL paths:

Fast path (Accelerated Path) - Packet flow when the packet is completely handled by the SecureXL device. It is processed and forwarded to the network. Note: In many discussions and images, the SXL path is labeled the "accelerated path". This also happened to me by mistake in this flowchart.

Medium path (PXL) - The CoreXL layer passes the packet to one of the CoreXL FW instances to perform the processing (even when CoreXL is disabled, the CoreXL infrastructure is used by the SecureXL device to send the packet to the single FW instance that still functions). When the Medium Path is available, the TCP handshake is fully accelerated with SecureXL. The rulebase match for the first packet is achieved through an existing connection acceleration template. SYN-ACK and ACK packets are also fully accelerated. However, once data starts flowing, the packets are handled by a FWK instance in order to stream them for Content Inspection. Any packets containing data are sent to FWK for data extraction to build the data stream. RST, FIN and FIN-ACK packets are again handled only by SecureXL, as they do not contain any data that needs to be streamed. This path is available only when CoreXL is enabled.
Packet flow when the packet is handled by the SecureXL device, except for:
IPS (some protections)
VPN (in some configurations)
Application Control
Content Awareness
Anti-Virus
Anti-Bot
HTTPS Inspection
Proxy mode
Mobile Access
VoIP
Web Portals

Medium path (CPASXL) - In R80.20, CPAS now also uses the SecureXL path. CPAS worked through the F2F path in R77.30 and R80.10; in R80.20, CPASXL is offered in the SecureXL path, which should lead to higher performance. Check Point Active Streaming (CPAS) allows changing data and plays the role of a "man in the middle". Several protocols use CPAS, for example: Client Authentication, VoIP (SIP, Skinny/SCCP, H.323, etc.), the Data Leak Prevention (DLP) blade, Security Servers processes, etc. I think this is not to be underestimated in tuning.

Slow path or Firewall path (F2F) - Packet flow when the SecureXL device is unable to process the packet. The packet is passed on to the CoreXL layer and then to one of the CoreXL FW instances for full processing. This path also processes all packets when SecureXL is disabled.

New in R80.20: Inline Streaming path, Medium Streaming path, Host path and Buffer path - These are new SecureXL paths used in conjunction with Falcon cards. They are described in more detail in the article "R80.x Security Gateway Architecture (Acceleration Card Offloading)".

Note on Falcon cards: Theoretically and practically there are even more than these three paths. This has to do with the offloading of SAM and Falcon cards (new in R80.20), with QXL (Quality of Service) and with other SecureXL technologies. That is beyond the scope of this article.

Fast Accelerator - The Fast Acceleration feature (green) lets you define trusted connections that bypass deep packet inspection on R80.20 JHF103 and above gateways.
This feature significantly improves throughput for these trusted high-volume connections and reduces CPU consumption. Rules can be created on the gateway CLI to bypass the SecureXL PSLXL path and route these connections through the fast path.

SecureXL chain modules (new in R80.20 and above)

SecureXL was significantly revised in R80.20 and now works in user space. This has also led to some changes in "fw monitor". There are new fw monitor chain modules (SecureXL) that do not run in the virtual machine (vm):

SecureXL inbound (sxl_in) > Packet received in SecureXL from the network
SecureXL inbound CT (sxl_ct) > Accelerated packets moved from inbound to outbound processing (post-routing)
SecureXL outbound (sxl_out) > Accelerated packet starts outbound processing
SecureXL deliver (sxl_deliver) > SecureXL transmits the accelerated packet

There are more new chain modules in R80.20:

vpn before offload (vpn_in) > FW inbound, prepares the tunnel for offloading the packet (along with the connection)
fw offload inbound (offload_in) > FW inbound, performs the offload
fw post VM inbound (post_vm) > Packet was not offloaded (slow path); processing continues in FW inbound

CoreXL

CoreXL is a performance-enhancing technology for Security Gateways on multi-CPU-core processing platforms. CoreXL enhances Security Gateway performance by enabling the processing CPU cores to concurrently perform multiple tasks. CoreXL provides almost linear scalability of performance, according to the number of processing CPU cores on a single machine. The increase in performance is achieved without requiring any changes to management or to network topology. On a Security Gateway with CoreXL enabled, the Firewall kernel is replicated multiple times. Each replicated copy, or FW instance, runs on one processing CPU core.
These FW instances handle traffic concurrently, and each FW instance is a complete and independent FW inspection kernel. When CoreXL is enabled, all the FW kernel instances in the Security Gateway process traffic through the same interfaces and apply the same security policy.

R80.20 CoreXL does not support these Check Point features: Overlapping NAT, VPN Traditional Mode, 6in4 traffic (this traffic is always processed by the global CoreXL FW instance #0, fw_worker_0) and more (see CoreXL Known Limitations).

Secure Network Distributor (SND) - Traffic entering network interface cards (NICs) is directed to a processing CPU core running the SND, which is responsible for:
Processing incoming traffic from the network interfaces
Securely accelerating authorized packets (if SecureXL is enabled)
Distributing non-accelerated packets among firewall kernel instances (the SND maintains a global dispatch table recording which connection was assigned to which instance)

The SND does not really touch any packet. The decision to stick to a particular FWK core is made at the first packet of a connection, at a very high level, before anything else. Depending on the SXL settings, and in most cases, SXL can offload decryption calculations. However, in some other cases, such as with Route-Based VPN, this is done by FWK.

Firewall Instance (fw_worker) - On a Security Gateway with CoreXL enabled, the Firewall kernel is replicated multiple times. Each replicated copy, or Firewall Instance, runs on one CPU processing core. These FW instances handle traffic concurrently, and each FW instance is a complete and independent Firewall inspection kernel. When CoreXL is enabled, all the Firewall kernel instances on the Security Gateway process traffic through the same interfaces and apply the same security policy.
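The SND's sticky, first-packet dispatch decision can be sketched as follows. This is a conceptual illustration, not SND internals; the static-hash picker mirrors the pre-dynamic assignment on the packet's IP addresses and protocol, and all names are invented.

```python
# Conceptual sketch: the SND decides at the FIRST packet of a connection
# which CoreXL FW instance will handle it, records the choice in a global
# dispatch table, and sends all later packets of that connection there.

dispatch_table = {}   # 5-tuple -> FW instance id

def snd_dispatch(conn, pick_instance):
    """pick_instance() is consulted only for the first packet of a connection."""
    if conn not in dispatch_table:
        dispatch_table[conn] = pick_instance()   # sticky decision
    return dispatch_table[conn]

def static_pick(conn, n_instances):
    # Static hash on source IP, destination IP and protocol.
    src, dst, proto = conn[0], conn[2], conn[4]
    return hash((src, dst, proto)) % n_instances

conn = ("10.0.0.1", 44000, "192.0.2.8", 80, "tcp")
first = snd_dispatch(conn, lambda: static_pick(conn, 4))
later = snd_dispatch(conn, lambda: 99)   # picker ignored: table already has an entry
print(first == later)   # True: the connection sticks to one FW instance
```

The second call shows the stickiness: once the global table has an entry, the picker is never consulted again for that connection.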
Dynamic Dispatcher - Rather than statically assigning new connections to a CoreXL FW instance based on the packet's IP addresses and IP protocol (static hash function), the dynamic assignment mechanism is based on the utilization of the CPU cores on which the CoreXL FW instances are running. The dynamic decision is made for the first packet of each connection by assigning each CoreXL FW instance a rank and selecting the CoreXL FW instance with the lowest rank. The rank for each CoreXL FW instance is calculated according to its CPU utilization: the higher the CPU utilization, the higher the CoreXL FW instance's rank, and hence the less likely it is to be selected by the CoreXL SND. The CoreXL Dynamic Dispatcher allows for better load distribution and helps mitigate connectivity issues during traffic peaks: connections opened at a high rate that would have been assigned to the same CoreXL FW instance by the static decision are now distributed across several CoreXL FW instances.

Multi Queue - Network interfaces on a Security Gateway typically receive traffic at different throughputs; some are busier than others. At a low level, when a packet is received from the NIC, a CPU core must be "interrupted", to the exclusion of all other processes, in order to receive the packet for processing. To avoid bottlenecks, multiple buffers, and therefore multiple CPU cores, can be affined to an interface. Each affined buffer can "interrupt" its own CPU core, allowing high volumes of inbound packets to be shared across multiple dispatchers. When most of the traffic is accelerated by SecureXL, the CPU load from the CoreXL SND instances can be very high, while the CPU load from the CoreXL FW instances can be very low. This is an inefficient utilization of CPU capacity. By default, the number of CPU cores allocated to CoreXL SND instances is limited by the number of network interfaces that handle the traffic.
Because each interface has one traffic queue, only one CPU core can handle each traffic queue at a time. This means that each CoreXL SND instance can use only one CPU core at a time for each network interface. Check Point Multi-Queue lets you configure more than one traffic queue for each network interface, so that for each interface more than one CPU core (running CoreXL SND) is used for traffic acceleration. This balances the load efficiently between the CPU cores that run the CoreXL SND instances and the CPU cores that run CoreXL FW instances.

Priority Queues - In some situations a Security Gateway can be overwhelmed. In circumstances where traffic levels exceed the capabilities of the hardware, whether due to legitimate traffic or a DoS attack, it is vital to maintain management communications and to continue interacting with dynamic routing neighbors. The Priority Queues functionality prioritizes control connections over data connections based on priority.

Affinity - Association of a particular network interface / FW kernel instance / daemon with a CPU core (either 'Automatic' (default) or 'Manual'). The default CoreXL interface affinity setting for all interfaces is 'Automatic' when SecureXL is installed and enabled. If SecureXL is enabled, the default affinities of all interfaces are 'Automatic': the affinity for each interface is automatically reset every 60 seconds and balanced between the available CPU cores based on the current load. If SecureXL is disabled, the default affinities of all interfaces are with the available CPU cores, i.e. those CPU cores that are not running a CoreXL FW instance and are not defined as the affinity for a daemon. The association of a particular interface with a specific processing CPU core is called the interface's affinity with that CPU core. This affinity causes the interface's traffic to be directed to that CPU core and the CoreXL SND to run on that CPU core.
The association of a particular CoreXL FW instance with a specific CPU core is called the CoreXL FW instance's affinity with that CPU core. The association of a particular user space process with a specific CPU core is called the process's affinity with that CPU core. The default affinity setting for all interfaces is Automatic. Automatic affinity means that if SecureXL is enabled, the affinity for each interface is reset periodically and balanced between the available CPU cores. If SecureXL is disabled, the default affinities of all interfaces are with one available CPU core. In both cases, all processing CPU cores that run a CoreXL FW instance, or that are defined as the affinity for another user space process, are considered unavailable, and the affinity for interfaces is not set to those CPU cores.

Passive Streaming Library (PSL) - IPS infrastructure which transparently listens to TCP traffic as network packets and rebuilds the TCP stream out of these packets. Passive Streaming can listen to all TCP traffic, but processes only the data packets that belong to a previously registered connection.

PXL - Technology name for the combination of SecureXL and PSL.
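The "available CPU cores" rule for interface affinity described above can be sketched as follows; this is a conceptual illustration with invented names, not the actual affinity logic.

```python
# Conceptual sketch: cores running a CoreXL FW instance or defined as the
# affinity for a daemon are "unavailable" for interface affinity; the rest
# are available for interfaces / SND.

def available_for_interfaces(all_cores, fw_instance_cores, daemon_affinity_cores):
    unavailable = set(fw_instance_cores) | set(daemon_affinity_cores)
    return sorted(set(all_cores) - unavailable)

cores = range(8)                 # an 8-core gateway
fw_cores = {2, 3, 4, 5, 6, 7}    # six CoreXL FW instances
daemon_cores = set()             # no manual daemon affinity
print(available_for_interfaces(cores, fw_cores, daemon_cores))   # [0, 1]
```

With six FW instances on an 8-core box, only cores 0 and 1 remain available for interface traffic, which matches the typical 2 SND / 6 worker split.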
The maximal number of possible CoreXL IPv4 FW instances:

Version                 Check Point Appliance   Open Server
R80.10 (Gaia 32-bit)    16                      16
R80.10 (Gaia 64-bit)    40                      40
R77.30 (Gaia 32-bit)    16                      16
R77.30 (Gaia 64-bit)    32                      32

USFW - In the kernel-mode FW, the maximum number of running cores is limited to 40 because of the Linux/Intel limitation of 2 GB kernel memory, and because the CoreXL architecture needs to load a large driver (~40 MB) dozens of times (according to the CPU number, up to 40 times). Newer platforms that contain more than 40 cores (e.g. the 23900) are not fully utilized. It is now possible to use more than 40 CoreXL cores with the R80.10+ user-mode firewall. For more information see sk149973.

Management Core - New in R80.30+: you can also allocate a core for management traffic if you have 8 or more licensed cores, but this is not the default. This R80.30+ feature separates management from data traffic via Routing Separation and Resource Separation, as described in sk138672.

R80.40+ (automatic adjustment of CoreXL, SNDs and Multi-Queue) - New in R80.40+: support for automatic allocation of CoreXL SNDs and Firewall instances that does not require a Security Gateway reboot. CoreXL and Multi-Queue offer an improved out-of-the-box experience: the Security Gateway automatically changes the number of CoreXL SNDs and Firewall instances and the Multi-Queue configuration based on the current traffic load, changing the CoreXL split between FW workers and SNDs on the fly based on CPU utilization.
Deciding keys: The average utilization of CoreXL SNDs and FWs is sampled regularly. If the utilization of either the CoreXL SNDs or the FWs is higher than that of the other group, an estimate is made of the utilization after "migrating" a CPU to the other group. Note: when SMT is on, the change is doubled. Supported on OS 3.10 (USFW/kernel) and on Check Point appliances with 8 or more cores; VSX is currently a limitation. Supported in Cluster HA; VSLS is currently a limitation.
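The "deciding keys" sampling just described can be sketched as follows. This is a conceptual model only; the real thresholds and estimation formula are internal to the gateway, and the simple spread-over-one-more-CPU model below is invented for illustration.

```python
# Conceptual sketch of the R80.40 idea: compare SND vs FW worker utilization
# and estimate whether migrating one CPU to the busier group would help.

def estimate_after_migration(group_util, cpus):
    # Rough model: the group's total work spread over one more CPU.
    return group_util * cpus / (cpus + 1)

def rebalance(snd_util, snd_cpus, fw_util, fw_cpus):
    if snd_util > fw_util:
        if estimate_after_migration(snd_util, snd_cpus) < snd_util:
            return "move one CPU from FW workers to SNDs"
    elif fw_util > snd_util:
        if estimate_after_migration(fw_util, fw_cpus) < fw_util:
            return "move one CPU from SNDs to FW workers"
    return "keep current split"

print(rebalance(snd_util=90, snd_cpus=2, fw_util=35, fw_cpus=6))
# move one CPU from FW workers to SNDs
```

The point of the estimate is that a CPU is only migrated when the projection says the busier group would actually be relieved.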
Flows:
If more SNDs are needed:
1. Find the least utilized CoreXL FW instance.
2. Stop dispatching new connections to that CoreXL FW instance.
3. Move the CoreXL FW instance to the CPU of the next least utilized CoreXL FW instance.
4. Turn on a new MQ queue on the "evicted" CPU.
Note: Eligible CoreXL SNDs must have an MQ queue ready.

If more FW instances (CoreXL) are needed:
1. Choose the last "stopped" CoreXL FW instance.
2. Turn off the MQ queue on the CPU it originally occupied.
3. Move the chosen CoreXL FW instance back to the CPU it originally occupied.
4. Start dispatching new connections to that CoreXL FW instance.
Note: No more than the maximum number of FW instances can be added.

FW Monitor Inspection Points

There are new fw monitor inspection points (red) when a packet passes through an R80.20+ Security Gateway:

Point    Name                Relation to firewall VM                                         Available since
i        Pre-Inbound         Before the inbound FireWall VM (for example, eth1:i)            always
I        Post-Inbound        After the inbound FireWall VM (for example, eth1:I)             always
id       Pre-Inbound VPN     Inbound before decrypt (for example, eth1:id)                   R80.20
ID       Post-Inbound VPN    Inbound after decrypt (for example, eth1:ID)                    R80.20
iq       Pre-Inbound QoS     Inbound before QoS (for example, eth1:iq)                       R80.20
IQ       Post-Inbound QoS    Inbound after QoS (for example, eth1:IQ)                        R80.20
o        Pre-Outbound        Before the outbound FireWall VM (for example, eth1:o)           always
O        Post-Outbound       After the outbound FireWall VM (for example, eth1:O)            always
e / oe   Pre-Outbound VPN    Outbound before encrypt (eth1:e in R80.10, eth1:oe in R80.20)   R80.10 / R80.20
E / OE   Post-Outbound VPN   Outbound after encrypt (eth1:E in R80.10, eth1:OE in R80.20)    R80.10 / R80.20
oq       Pre-Outbound QoS    Outbound before QoS (for example, eth1:oq)                      R80.20
OQ       Post-Outbound QoS   Outbound after QoS (for example, eth1:OQ)                       R80.20

The "Pre-Encrypt" fw monitor inspection point (e) and the "Post-Encrypt" fw monitor inspection point (E) are new in R80.10 and above. Note: they only exist on the outbound side for encrypting packets, not for decrypting packets on the inbound side.

New in R80.20+: In the firewall kernel (now also SecureXL), each chain module is associated with a key which specifies the type of traffic applicable to that chain module.

# fw ctl chain

Key        Function
ffffffff   IP Option Strip/Restore
00000001   Newly processed flows
00000002   Wire mode
00000003   Applied to all encrypted traffic (VPN)
00000000   SecureXL offloading (new in R80.20+)

Content Inspection

For more details see the article: R80.x Security Gateway Architecture (Content Inspection)

Content inspection is a very complicated process; it is shown here only for the example of R80.10 IPS and the R80.10 App Classifier. It is also possible for other services; please refer to the corresponding SKs. In principle, all content is processed via the Context Management Infrastructure (CMI) and the CMI Loader and forwarded to the corresponding daemon. Session-based processing enforces advanced access control and threat detection and prevention capabilities. To do this, packets are assembled into a stream, the stream is parsed for relevant contexts, and security modules then inspect the content. When possible, a common pattern matcher performs simultaneous inspection of the content for multiple security modules.
In multi-core systems this processing is distributed among the cores to provide near-linear scalability with each additional core.

Security modules use a local cache to detect known threats. This local cache is backed by real-time lookups of a cloud service, and the results of cloud lookups are then cached in the kernel for subsequent lookups. Cloud assist also enhances unknown threat detection and prevention. In particular, a file whose signature is not known in a local cache is sent to our cloud service for processing, where compute, disk and memory are virtually unlimited. Our sandboxing technology, SandBlast Threat Emulation, identifies threats in their infancy, before malware has an opportunity to deploy and evade detection. Newly discovered threats are sent to the cloud database to protect other connected Check Point gateways and devices. When possible, active content is removed from files, which are then sent on to the user while the emulation is done.

Passive Streaming Library (PSL) - Packets may arrive out of order or may be legitimate retransmissions of packets that have not yet received an acknowledgment. In some cases a retransmission may also be a deliberate attempt to evade IPS detection by sending the malicious payload in the retransmission. The Security Gateway ensures that only valid packets are allowed to proceed to their destinations. It does this with Passive Streaming Library (PSL) technology. PSL is an infrastructure layer which provides stream reassembly for TCP connections. The gateway makes sure that the TCP data seen by the destination system is the same as that seen by the code above PSL. This layer handles packet reordering and congestion handling, and is responsible for various security aspects of the TCP layer, such as handling payload overlaps, some DoS attacks and others. The PSL layer is capable of receiving packets from the firewall chain and from the SecureXL module.
The PSL layer serves as a middleman between the various security applications and the network packets. It provides the applications with a coherent stream of data to work with, free of various network problems or attacks. The PSL infrastructure is wrapped with well-defined APIs, called the Unified Streaming APIs, which the applications use to register for and access streamed data.

Protocol Parsers - The Protocol Parsers' main functions are to ensure compliance with well-defined protocol standards, detect anomalies if any exist, and assemble the data for further inspection by other components of the IPS engine. They include HTTP, SMTP, DNS, IMAP, Citrix, and many others. In a way, protocol parsers are the heart of the IPS system. They register themselves with the streaming engine (usually PSL), get the streamed data, and dissect the protocol. The protocol parsers can analyze the protocols in both the Client to Server (C2S) and Server to Client (S2C) directions. The output of the protocol parsers is contexts. A context is a well-defined part of the protocol on which further security analysis can be performed. Examples of such contexts are an HTTP URL, an FTP command, an FTP file name, an HTTP response, and certain files.

Context Management Infrastructure (CMI) and Protections - The Context Management Infrastructure (CMI) is the "brain" of content inspection. It coordinates the different components, decides which protections should run on a certain packet, decides the final action to be performed on the packet and issues an event log. CMI separates parsers from protections. A protection is a set of signatures and/or handlers, where:
Signature - a malicious pattern that is searched for
Handler - INSPECT code that performs more complex inspection
CMI is a way to connect and manage parsers and protections. Since they are separated, protections can be added in updates, and performance does not depend on the number of active protections.
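The stream reassembly service that PSL provides to the parsers above it can be sketched as follows. This is a conceptual illustration, not PSL internals; it handles reordering and exact retransmissions, while overlap resolution and congestion handling are omitted.

```python
# Conceptual sketch: rebuild an in-order TCP byte stream from segments that
# may arrive out of order or as exact retransmissions.

def reassemble(segments):
    """segments: list of (seq, bytes); returns the contiguous stream from seq 0."""
    buffered = {}
    for seq, data in segments:
        buffered.setdefault(seq, data)   # an exact retransmission is dropped
    stream, expected = b"", 0
    while expected in buffered:
        data = buffered.pop(expected)
        stream += data
        expected += len(data)            # advance past the delivered bytes
    return stream

segments = [(5, b"world"), (0, b"hello"), (0, b"hello")]  # out of order + retransmit
print(reassemble(segments))   # b'helloworld'
```

The parsers registered above PSL only ever see the reassembled `b'helloworld'`, never the raw, reordered segments, which is what defeats evasion via retransmitted payloads.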
Protections are usually written per protocol context: they get the data from the contexts and validate it against the relevant signatures. Based on the IPS policy, the CMI determines which protections should be activated on every context discovered by a protocol parser. If policy dictates that no protections should run, then the relevant parsers on this traffic are bypassed in order to improve performance and reduce potential false positives. When a protection is activated, it can decide whether the given packet or context is OK or not; it does not decide what to do with the packet. The CMI is responsible for the final action to be performed on the packet, given several considerations. The considerations include:
Activation status of the protection (Prevent, Detect, Inactive)
Exceptions, either on traffic or on the protection
Bypass mode status (the software fail-open capability)
Troubleshooting mode status
Whether only the internal network or all traffic is protected

CMI Loader - Collects signatures from multiple sources (e.g. IPS, Application Control, ...) and compiles them together into unified Pattern Matchers (PM), one for each context, such as URL, Host header, etc.

Pattern Matcher - The Pattern Matcher is a fundamental engine within the new enforcement architecture. The Pattern Matcher quickly identifies harmless packets and common signatures in malicious packets, and performs a second-level analysis to reduce false positives. The Pattern Matcher engine provides the ability to find regular expressions on a stream of data using a two-tiered inspection process.

UP Manager - The UP Manager controls all interactions of the components and interfaces with the Context Management Infrastructure (CMI) Loader, the traffic director of the CMI. The UP Manager also has a list of Classifiers that have registered for "first packets" and uses a bitmap to instruct the UP Classifier to execute these Classifier Apps on the packet. The "first packets" arrive directly from the CMI.
Parsing of the protocol and streaming are not needed at this stage of the connection. For "first packets" the UP Manager executes the rule base.

Classifier - When the "first packet" rule base check is complete, Classifiers initiate streaming for subsequent packets in the session. The "first packet" rule base check identifies a list of rules that may possibly match and a list of classifier objects (CLOBs) that are required to complete the rule base matching process. The Classifier reads this list and generates the required CLOBs to complete the rule base matching. Each Classifier App executes on the packet and reports the result of the CLOB to the UP Manager. The CMI then tells the Protocol Parser to enable streaming. In some cases Classifier Apps do not require streaming because the first packet information is sufficient, and the rule base decision can be made on the first packet, for example:
- Dynamic Objects
- Domain Objects
- Only the Firewall blade is enabled
On subsequent packets the Classifier can be contacted directly from the CMI using the CMI Loader infrastructure; e.g., when the Pattern Matcher has found a match, it informs the CMI that it has found application xyz. The CMI Loader passes this information to the Classifier. The Classifier runs the Classification Apps to generate the CLOBs required for Application Control and sends the CLOBs to the Observer.

Observer - The Observer decides if enough information is known to publish a CLOB to the security policy. CLOBs are observed in the context of their transaction and the connection that the transaction belongs to. The Observer may request more CLOBs for a dedicated packet from the Classifier (e.g., if a file type is needed for Content Awareness but the gateway has not yet received the S2C response containing the file), or decide that it has sufficient information about the packet to execute the rule base on the CLOB. Executing the rule base on a CLOB is called "publishing a CLOB".
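The first-packet rule base check described above can be reduced to a toy model: it narrows the rule base to candidate rules and reports which CLOB types are still needed to finish matching. The rule format, field names, and example rules below are all invented for illustration; if the required list comes back empty (e.g. only IP-level criteria are in play), the decision can be made on the first packet:

```python
# Toy sketch (invented rule format) of a first-packet rule base check.
import ipaddress

RULES = [
    {"id": 1, "src": "10.0.0.0/8", "needs": ["application"]},
    {"id": 2, "src": "any",        "needs": []},
]

def src_matches(rule_src, src_ip):
    return (rule_src == "any" or
            ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule_src))

def first_packet_check(src_ip):
    """Return (candidate rule ids, CLOB types still required to match)."""
    candidates = [r for r in RULES if src_matches(r["src"], src_ip)]
    required = sorted({need for r in candidates for need in r["needs"]})
    return [r["id"] for r in candidates], required

print(first_packet_check("10.1.2.3"))   # -> ([1, 2], ['application'])
print(first_packet_check("192.0.2.7"))  # -> ([2], []) - decidable on first packet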
The Observer may wait to receive more CLOBs that belong to the same transaction before publishing the CLOBs.

Security Policy - The Security Policy receives the CLOB published by the Observer. The CLOB includes a description of the Blade it belongs to, so that matching can be performed on a column basis. The Security Policy saves the current state on the transaction's Handle: either to continue the inspection or a final match. The first packets are received directly from the UP Manager. Subsequent packets are received by the rule base from the Observer.

Handle - Each connection may consist of several transactions. Each transaction has a Handle, and each Handle contains a list of published CLOBs. The Handle holds the state of the security policy matching process; the Handle infrastructure component stores the rule base matching state and related information.

Subsequent Packets - Subsequent packets are handled by the streaming engine. The streaming engine notifies the Classifier to perform the classification. The Classifier notifies the UP Manager about the performed classification and passes the CLOBs to the Observer, which may need to wait for information from the CMI. The CMI sends the information describing the result of the Protocol Parser and the Pattern Matcher to the Classifier; the Classifier informs the UP Manager and sends the CLOB to the Observer. The UP Manager then instructs the Observer to publish the CLOBs to the Rule Base. The Rule Base is executed on the CLOBs and the result is communicated to the UP Manager. The CLOBs and the related Rule Base state are stored in the Handle. The UP Manager provides the result of the rule base check to the CMI, which then decides to allow or drop the connection. The CMI generates a log message and instructs the streaming engine to forward the packets to the outbound interface.
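The Observer/Handle interplay above boils down to "accumulate CLOBs per transaction, publish only when enough is known". A minimal sketch, again with invented names (the Content Awareness example from above - waiting for the S2C file type - maps directly onto it):

```python
# Toy sketch (invented names): a Handle accumulates CLOBs for one
# transaction; the Observer publishes to the rule base only once every
# required CLOB type has arrived.

class Handle:
    def __init__(self, required_types):
        self.required = set(required_types)
        self.clobs = {}                       # CLOB type -> value

    def add_clob(self, clob_type, value):
        self.clobs[clob_type] = value

    def ready_to_publish(self):
        # all required CLOB types must be present
        return self.required <= self.clobs.keys()

h = Handle(["application", "file_type"])
h.add_clob("application", "dropbox")
print(h.ready_to_publish())   # -> False: still waiting for the S2C file type
h.add_clob("file_type", "pdf")
print(h.ready_to_publish())   # -> True: the Observer can publish the CLOBs
```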
Content Awareness (CTNT) - Content Awareness is a new blade introduced in R80.10 as part of the new Unified Access Control Policy. Using the Content Awareness blade as part of the Firewall policy allows the administrator to enforce the Security Policy based on the content of the traffic, by identifying files and their content. Content Awareness restricts the Data Types that users can upload or download. Content Awareness can be used together with Application Control to enforce more advanced scenarios (e.g. identifying which files are uploaded to Dropbox).

References

SecureKnowledge: SecureXL
SecureKnowledge: NAT Templates
SecureKnowledge: VPN Core
SecureKnowledge: CoreXL
SecureKnowledge: CoreXL Dynamic Dispatcher in R77.30 / R80.10 and above
SecureKnowledge: Application Control
SecureKnowledge: URL Filtering
SecureKnowledge: Content Awareness (CTNT)
SecureKnowledge: IPS
SecureKnowledge: Anti-Bot and Anti-Virus
SecureKnowledge: Threat Emulation
SecureKnowledge: Best Practices - Security Gateway Performance
SecureKnowledge: MultiCore Support for IPsec VPN in R80.10 and above
SecureKnowledge: SecureXL Fast Accelerator (fw fast_accel) for R80.20 and above
Download Center: R80.10 Next Generation Threat Prevention Platforms
Download Center: R77 Security Gateway Packet Flow
Download Center: R77 Security Gateway Architecture
Support Center: Check Point Security Gateway Architecture and Packet Flow
Checkmates: Check Point Threat Prevention Packet Flow and Architecture
Checkmates: fw monitor inspection point e or E
Infinity NGTP architecture
Security Gateway Packet Flow and Acceleration - with Diagrams
R80.x Security Gateway Architecture (Content Inspection)

Questions and Answers

Q: Why one diagram with both SecureXL and CoreXL?
A: I dared to map both worlds, CoreXL and SecureXL, in one diagram. This is only possible to a limited extent, as these are different technologies - it is nearly an impossible mission. Why?
- CoreXL is a mechanism to assign, balance and manage CPU cores.
CoreXL SND makes the decision to "stick" a particular connection to a specific FWK instance.
- With SecureXL, certain connections can avoid the FW path partially (packet acceleration) or completely (acceleration with templates).

Q: Why are both technologies in one flowchart?
A: Because the two technologies play hand in hand. Two separate illustrations become problematic, e.g. in the Medium Path.

Q: Why in the Medium Path?
A: Here, the packet-oriented part (SecureXL) cannot be mapped cleanly onto the connection-based part (CoreXL). Therefore, the following note applies, from a newer Check Point article by Valeri Loukine (Security Gateway Packet Flow and Acceleration - with Diagrams, 08-07-2018) and the original article by Moti Sagey (Check Point Threat Prevention Packet Flow and Architecture, 04-25-2017): When the Medium Path is available, the TCP handshake is fully accelerated with SecureXL. The rule base match is achieved for the first packet through an existing connection acceleration template. SYN-ACK and ACK packets are also fully accelerated. However, once data starts flowing, the packets are handled by an FWK instance so that they can be streamed for Content Inspection. Any packets containing data are sent to FWK for data extraction to build the data stream. RST, FIN and FIN-ACK packets, once again, are handled only by SecureXL, as they do not contain any data that needs to be streamed.

Q: What is the point of this article?
A: To create an overview of both worlds with regard to the following innovations in R80.x:
- new fw monitor inspection points in R80 (e and E)
- new MultiCore VPN with dispatcher
- new UP Manager in R80

Q: Why the designation "Logical Packet Flow"?
A: Because the logical flow in the overview differs from the real flow. For example, the medium path is only a simplified logical representation of the real path. This was necessary to map all three paths (F2F, SXL, PXL) in one image - hence the name "Logical Packet Flow".

Q: What's the next step?
A: I'm thinking about how to make the overview even better.

Q: Wording?
A: It was important to me that the correct Check Point terms were used. Many documents on the Internet use the terms incorrectly. Therefore I am grateful to everyone who still finds wording errors here.

Q: What's the GA version?
A: This version has been approved by a Check Point representative, and we agreed that this should be the final version.

Versions

Version R80.40:
1.6a - new R80.30+ flowchart with SK104468 and SK156672 (13.01.2020)

Version R80.30:
1.5a - added new R80.30+ flowchart picture and PDF, added QoS path in flowchart, added new R80.30 path names (16.12.2019)
1.4a - update - automatically changes the number of CoreXL SNDs and Firewall instances and the Multi-Queue (02.09.2019)
1.4b - update - HTTP/2 support (03.09.2019)
1.4c - update - Host path, Buffer path, Inline path (04.09.2019)
1.4c - update - now eight firewall paths are possible (14.09.2019)
1.4d - R&D checked the logical packet flowchart for R80.20 and gave the green light (05.11.2019)
1.4e - added R80.20 JHF103 fast accelerator feature (15.11.2019)
1.4f - updated flowchart with "Fast Accel" (16.11.2019)
1.4g - updated R80.40 EA infos (27.11.2019)
1.4h - new table with R80.10 / R80.20 / R80.30 / R80.40 paths (15.12.2019)
1.3a - updated R80.30 management core (25.07.2019)
1.3b - updated R80.30 HTTPS SNI (28.07.2019)
1.3c - updated R80.20 new async flowchart (15.08.2019)
1.3d - updated R80.20 packet reinjection (20.08.2019)

Version R80.20:
1.2a - article updated to R80.20 (16.11.2018)
1.2b - updated inspection points id, iD and more (19.11.2018)
1.2c - updated maximal number of CoreXL IPv4 FW instances (20.11.2018)
1.2d - updated R80.20 new functions (05.11.2018)
1.2e - bug fix (06.01.2019)
1.2f - updated fw monitor inspection points ie / IE (23.01.2019)
1.2g - updated sk151114 VPN + SecureXL (20.04.2019)
1.2h - updated fw monitor inspection points (10.07.2019)

Version R80.10:
1.1b - final GA version (08.08.2018)
1.1c - changed words to new R80 terms
(08.08.2018)
1.1d - corrected a mistake with SXL and "Accelerated path" (09.08.2018)
1.1e - bug fixed (29.08.2018)
1.1f - QoS (24.09.2018)
1.1g - corrected a mistake in the PDF (26.09.2018)
1.1h - added PSLXL and CPASXL path in R80.20 (27.09.2018)
1.1i - added "Medium Streaming Path" and "Inline Streaming Path" in R80.20 (28.09.2018)
1.1j - added "new R80.20 chain modules" (22.10.2018)
1.1k - bug fix chain modules (04.11.2018)
1.1l - added "chapters" (10.11.2018)
1.1m - added R80.20 fw monitor inspection points "oe" and "OE" (17.12.2018)

R80.10 EA Version:
1.0a - final version (28.07.2018)
1.0c - changed colors (28.07.2018)
1.0d - added content inspection text (29.07.2018)
1.0e - added content inspection drawing (29.07.2018)
1.0f - updated links (29.07.2018)
1.0g - updated content inspection drawing flows and actions (30.07.2018)
1.0h - changed SecureXL flow (30.07.2018)
1.0i - corrected SecureXL packet flow (01.08.2018)
1.0j - corrected SecureXL names and "fw monitor inspection points" (02.08.2018)
1.0k - added new article "Security Gateway Packet Flow and Acceleration - with Diagrams" from 06.08.2018 to "References and links" (06.08.2018)
1.0l - added "Questions and Answers" (07.08.2018)
1.0m - R&D checked the logical packet flowchart for R80.10 and gave the green light (08.08.2018)

Copyright by Heiko Ankenbrand 1994 - 2020