Enterprise Appliances and Gaia OS

Have questions about Security Gateway Appliances, Gaia OS, CoreXL, SecureXL, or ClusterXL? This is where to ask them! This also includes legacy operating systems like SecurePlatform, IPSO, or XOS.

For Small Business Security appliances (600/700/1200R/1400/1500), see the SMB Appliances and SMP space.

Appliance vs. Virtual Machine

Hi mates! I'm wondering what the benefit is of using a VS on, for example, VMware ESXi instead of a normal Check Point appliance. When should one choose a VS over a Check Point appliance? For example, I would like to set up an environment with SandBlast that will serve over 10,000 users. Best wishes, Nbto

High memory usage

Hello, I wanted to share an issue we have with our gateway. We have the following blades enabled: fw urlf appi identityServer SSL_INSPECT content_awareness mon. The appliance has 16 GB of RAM and runs the latest R80.30.

The problem is that at some point memory usage increases sharply and it never comes down unless we reboot the appliance. This causes traffic issues, because some connections get disconnected while it is happening. In top (shift+m) I can't find any process that would account for this behaviour. I hope I am not alone with this issue, so please give a shout if you have seen something similar. Some occurrences from the past to show what happens:
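For anyone comparing notes: a quick way to tell user-space from kernel-side consumption (a minimal sketch; all standard Gaia/Linux and Check Point commands, run in expert mode):

# free -m                          # overall memory picture
# ps auxw --sort=-rss | head -15   # top user-space consumers by resident size
# fw ctl pstat                     # kernel-side memory (hmem/smem/kmem) and connections statistics
# grep -i slab /proc/meminfo       # kernel slab usage that top will not attribute to any process

If top shows no large process but free memory keeps shrinking, the growth is likely on the kernel side (fw ctl pstat / slab), which would point at a kernel memory leak rather than a daemon.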

Legacy Policy or Unified Policy for Mobile Access Blade on R80.20 and above

Hi, as per the subject: which policy mode should I implement for R80.20 and above? Some feedback says to stay with the legacy policy, but the unified policy is the beauty of the R80 platform. However, some settings are still only available in legacy mode, which may confuse customers. Will the next release move all settings into the new R80 console instead of loading them in the legacy SmartDashboard?

View logs of Cluster switch over

Hello, where can I see logs about a ClusterXL failover, to see the reason why it happened?
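A sketch of the usual places to look (standard ClusterXL commands; paths are Gaia defaults):

# cphaprob stat                           # current cluster state per member
# cphaprob -l list                        # registered pnotes (critical devices) and their state
# grep -i clusterxl /var/log/messages*    # kernel/cluster messages around the failover time

Failover events also appear in SmartLog/SmartConsole logs filtered on ClusterXL, which usually state the reason (e.g., interface down, pnote problem).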

GRE Tunnel

Hi experts, I believe a GRE tunnel cannot be terminated on Check Point firewalls (please confirm whether it is supported in any way, in any software or hardware version or model). Also, this GRE usage is proprietary to another vendor; is that the reason Check Point does not support it, or are there other technical reasons? Please let me know, any information is highly appreciated. Thanks in advance. Vijay

LOM Interface syslog?

Hi all, I tried to send the event logs from the LOM interface of a Check Point firewall to a syslog server. It seems that there is no option for sending syslog out of the LOM (??). Other vendors' out-of-band management interfaces, like Supermicro (IPMI), HP iLO, Dell iDRAC etc., are all able to send syslog messages. Did I miss something, or how can I enable syslog in the LOM? Thanks, Peter

How to delete admin user

Hello all, I am trying to delete the admin user. I didn't find any SK about this; the SKs I found relate to disabling the admin user. Thanks.
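For what it's worth, locally defined Gaia users can normally be removed in clish (a sketch; <username> is a placeholder):

> delete user <username>
> save config

The built-in admin account is a system default, so the SKs pointing at disabling it rather than deleting it may reflect the supported behaviour.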

Gaia HealthCheck Script v7.07 released

Check Point has released v7.07 of its Gaia HealthCheck Script.

Script author: @Nathan_Davieau (LinkedIn profile)
QA Director: @Barak_Ran (LinkedIn profile)

What's new:
- Automatically retrieve the latest CPUSE, JHF, and CPINFO build numbers from the Check Point website

What's MISSING:
- Recognition of expired 1-year licenses, to avoid warnings on such systems (example: CPSB-COMP-5-1Y)
- Recognition of non-RAID environments, to avoid warnings on such systems (example: ESXi hosts)

Download: healthcheck.sh script | Link: v7.07 | Date: 13Nov2019
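Typical usage for first-time runners (a sketch; assumes the script has been copied to the gateway, e.g. to /home/admin, and is run in expert mode - the script's own documentation is authoritative for its options):

# chmod +x healthcheck.sh
# ./healthcheck.sh        # prints a pass/warning report and saves output for later review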

sysctl net.ipv4.tcp_timestamps

Hi, on a Check Point 5900 R80.10 cluster we see that when Mac and Linux clients go to certain websites, those websites load very slowly or not at all. In tcpdump traces we see a lot of retransmissions and duplicate ACKs stalling the TCP session. On Windows we do not see this behaviour at all.

We finally found that this happens when the client has net.ipv4.tcp_timestamps=1 set. On Linux you can disable this, and then we do not see the issue, but on Mac you cannot disable it anymore since El Capitan. When you enable this setting on a Windows client with netsh int tcp set global timestamps=enabled, you get the same behaviour. When the Mac clients use a proxy server that has TCP timestamps disabled, the problem also disappears.

When the Mac and Linux clients are connected to a 1490 SMB appliance this behaviour does not appear, so it is the combination of Mac/Linux clients with net.ipv4.tcp_timestamps=1 and our Check Point 5900 with R80.10. (We also saw this on a 12210 with R77.x in 2016 when Mac moved to Yosemite; we could only replicate it when the Check Point was under high load, and it disappeared after some tweaking of the multiple processors and adding more memory.)

On the gateway policy we disabled all IPS and TCP inspection settings, but the problem persists. Is anybody aware of a setting so that the Check Point works well with clients that have TCP timestamps enabled? Kind regards, Mikel Aanstoot
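For side-by-side testing, these are the client-side toggles referenced above (standard OS commands; the Windows variant is the one quoted in the post, run from an elevated prompt):

# Linux - disable TCP timestamps at runtime, then persist across reboots
sysctl -w net.ipv4.tcp_timestamps=0
echo "net.ipv4.tcp_timestamps=0" >> /etc/sysctl.conf

:: Windows - toggle timestamps to reproduce or clear the problem
netsh int tcp set global timestamps=enabled
netsh int tcp set global timestamps=disabled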

R80.x Performance Tuning Tip – Multi Queue

What is Multi Queue?

It is an acceleration feature that lets you assign more than one packet queue and CPU to an interface. When most of the traffic is accelerated by SecureXL, the CPU load from the CoreXL SND instances can be very high while the CPU load from the CoreXL FW instances is very low. This is an inefficient utilization of CPU capacity.

By default, the number of CPU cores allocated to CoreXL SND instances is limited by the number of network interfaces that handle the traffic. Because each interface has one traffic queue, only one CPU core can handle each traffic queue at a time. This means that each CoreXL SND instance can use only one CPU core at a time per network interface.

Check Point Multi-Queue lets you configure more than one traffic queue for each network interface, so that for each interface more than one CPU core (running CoreXL SND) is used for acceleration. This balances the load efficiently between the CPU cores that run the CoreXL SND instances and the CPU cores that run the CoreXL FW instances.

Important - Multi-Queue applies only if SecureXL is enabled.

Chapter

More interesting articles:
- R80.x Architecture and Performance Tuning - Link Collection
- Article list (Heiko Ankenbrand)

Multi-Queue Requirements and Limitations - Tip 1

- Multi-Queue is not supported on computers with one CPU core.
- Network interfaces must use a driver that supports Multi-Queue. Only network cards that use the igb (1Gb), ixgbe (10Gb), i40e (40Gb), or mlx5_core (40Gb) drivers support Multi-Queue.
- You can configure a maximum of five interfaces with Multi-Queue.
- You must reboot the Security Gateway after any change in the Multi-Queue configuration.
- For best performance, it is not recommended to assign both an SND and a CoreXL FW instance to the same CPU core.
- Multi-Queue is relevant only if SecureXL and CoreXL are enabled.
- Do not change the IRQ affinity of queues manually; doing so can adversely affect performance.
- You cannot use the "sim affinity" or "fw ctl affinity" commands to change or query the IRQ affinity of Multi-Queue interfaces.

The number of queues is limited by the number of CPU cores and the type of interface driver:

Network card driver | Speed | Maximal number of RX queues
igb                 | 1 Gb  | 4
ixgbe               | 10 Gb | 16
i40e                | 40 Gb | 14
mlx5_core           | 40 Gb | 10

The maximum RX queues limit dictates the largest number of SND/IRQ instances that can empty packet buffers for an individual interface that has Multi-Queue enabled on that driver.

Multi-Queue does not work on 3200 / 5000 / 15000 / 23000 appliances in the following scenario (sk114625):
- MQ is enabled for on-board interfaces (e.g., Mgmt, Sync), and
- the number of active RX queues was set to either 3 or 4 (with the cpmq set rx_num igb <number> command).

This problem was fixed in Check Point R80.10, and in the Jumbo Hotfix Accumulator for R77.30 since Take_198. The number of traffic queues is limited by the number of CPU cores and the type of network interface card driver. The on-board interfaces on these appliances use the igb driver, which supports up to 4 RX queues; however, the I211 controller on these on-board interfaces supports only up to 2 RX queues.
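Before changing anything, the current queue layout can be checked from expert mode (a quick sketch; cpmq is the Multi-Queue tool on the 2.6.18-kernel R80.x releases discussed here - newer 3.10-kernel releases manage this differently):

# cpmq get        # interfaces with Multi-Queue enabled
# cpmq get -a     # Multi-Queue status of all supported interfaces, incl. active RX queue count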
When Multi-Queue will not help - Tip 2

- When most of the processing is done in CoreXL - either in the Medium path or in the Firewall path (Slow path).
- When all current CoreXL FW instances are highly loaded, so there are no CPU cores that can be reassigned to SecureXL.
- When IPS or other deep-inspection Software Blades are heavily used.
- When all network interface cards are processing the same amount of traffic.
- When all CPU cores that are currently used by SecureXL are congested.
- When trying to increase the traffic session rate.
- When there is not enough diversity of traffic flows. In the extreme case of a single flow, traffic will be handled by only a single CPU core. (Clarification: the more traffic passes to/from different ports/IP addresses, the more you benefit from Multi-Queue. If there is a single traffic flow from a single client to a single server, Multi-Queue will not help.)

Multi-Queue is recommended when:

- Load on CPU cores that run as SND is high (idle < 20%).
- Load on CPU cores that run CoreXL FW instances is low (idle > 50%).
- There are no CPU cores left to be assigned to the SND by changing interface affinity.

Multi-Queue support on Appliance vs. Open Server

Gateway type | Network interfaces that support Multi-Queue
Check Point Appliance | MQ is supported on all appliances that use the igb, ixgbe, i40e, or mlx5_core drivers. These expansion line cards for 4000, 12000, and 21000 appliances support Multi-Queue: CPAC-ACC-4-1C, CPAC-ACC-4-1F, CPAC-ACC-8-1C, CPAC-ACC-2-10F, CPAC-ACC-4-10F. This expansion line card for 5000, 13000, and 23000 appliances supports Multi-Queue: CPAC-2-40F-B.
Open Server | Network cards that use the igb (1Gb), ixgbe (10Gb), i40e (40Gb), or mlx5_core (40Gb) drivers support Multi-Queue.

Multi-Queue support on Open Server (Intel Network Cards) - Tip 3

The following list gives an overview of all Intel cards from the Check Point HCL for open servers as of 11/21/2018, cross-referenced to the Intel drivers. I do not assume any liability for the correctness of the information. These lists should only be used to help you find the right drivers; they are not an official Check Point document, so please always read the official Check Point documents.

Intel network card | Ports | Chipset | PCI ID | Driver | PCI | Speed | MQ
10 Gigabit AT | 1 | 82598EB | 8086:25e7 | ixgbe | PCI-E | 10G Copper | yes
10 Gigabit CX4 | 2 | 82598EB | 8086:10ec | ixgbe | PCI-E | 10G Copper | yes
10 Gigabit XF family (Dual and Single Port models, SR and LR) | 2 | 82598 | 8086:10c6 | ixgbe | PCI-E | 10G Fiber | yes
Ethernet Converged Network Adapter X540-T2 | 2 | X540 | 8086:1528 | ixgbe | PCI-E | 100/1G/10G Copper | yes
Ethernet Server Adapter I340-T2 | 2 | 82580 | - | igb | PCI-E | 10/100/1G Copper | yes
Ethernet Server Adapter I340-T4 | 2 | 82580 | - | igb | PCI-E | 10/100/1G Copper | yes
Ethernet Server Adapter X520 (X520-SR2, X520-SR1, X520-LR1, X520-DA2) | 2 | X520 | - | ixgbe | PCI-E | 10G Fiber | yes
Gigabit VT Quad Port Server Adapter | 4 | 82575GB | 8086:10d6 | igb | PCI-E | 10/100/1G Copper | yes
Intel Gigabit ET2 Quad Port Server Adapter | 4 | - | - | igb | PCI-E | 1G Copper | yes
PRO/10GbE CX4 | 1 | 82597EX | 8086:109e | ixgb | PCI-X | 10G Copper | no
PRO/10GbE LR | 1 | 82597EX | 8086:1b48 | ixgb | PCI-X | 10G Fiber | no
PRO/10GbE SR | 1 | 82597EX | 8086:1a48 | ixgb | PCI-X | 10G Fiber | no
PRO/1000 Dual 82546GB | 2 | 82546GB | 8086:108a | e1000 | PCI-E | 10/100/1G Copper | no
Pro/1000 EF Dual | 2 | 82576 | 8086:10e6 | igb ? | PCI-E | 1G Fiber | yes ?
Pro/1000 ET Dual Port Server Adapter | 2 | 82576 | - | igb | PCI-E | 1G Copper | yes
PRO/1000 ET Quad Port Server Adapter | 4 | 82576 | 8086:10e8 | igb | PCI-E | 10/100/1G Copper | yes
PRO/1000 GT Quad | 4 | 82546 | 8086:10b5 | e1000 | PCI-X | 10/100/1G Copper | no
PRO/1000 MF | 1 | 82546 ? 82545 ? | - | e1000 | PCI-X | 1G Fiber | no
PRO/1000 MF (LX) | 1 | 82546 ? 82545 ? | - | e1000 | PCI-X | 1G Fiber | no
PRO/1000 MF Dual | 2 | 82546 ? 82545 ? | - | e1000 | PCI-X | 1G Fiber | no
PRO/1000 MF Quad | 4 | 82546 ? 82545 ? | - | e1000 | PCI-X | 1G Fiber | no
PRO/1000 PF | 1 | 82571 ? | 8086:107e | e1000 | PCI-E | 1G Fiber | no
PRO/1000 PF Dual | 2 | 82571 ? | 8086:115f | e1000 | PCI-E | 1G Fiber | no
PRO/1000 PF Quad Port Server Adapter | 4 | 82571 ? | 8086:10a5 | e1000 | PCI-E | 1G Fiber | no
PRO/1000 PT | 1 | 82571 | 8086:1082 | e1000 | PCI-E | 10/100/1G Copper | no
PRO/1000 PT Dual | 2 | 82571 | 8086:105e | e1000 | PCI-E | 10/100/1G Copper | no
PRO/1000 PT Dual UTP | 2 | 82571 | 8086:108a | e1000 | PCI-E | 10/100/1G Copper | no
PRO/1000 PT Quad | 4 | 82571 | 8086:10a4 | e1000 | PCI-E | 10/100/1G Copper | no
PRO/1000 PT Quad Low Profile | 4 | 82571 | 8086:10bc | e1000 | PCI-E | 10/100/1G Copper | no
PRO/1000 XF | 1 | 82544 | - | e1000 | PCI-X | 1G Fiber | no

For all "?" entries I could not clarify the details exactly.

Multi-Queue support on Open Server (HP and IBM Network Cards) - Tip 4

The following list gives an overview of all HP cards from the Check Point HCL for open servers as of 11/22/2018, cross-referenced to the Intel drivers. I do not assume any liability for the correctness of the information. These lists should only be used to help you find the right drivers; they are not an official Check Point document, so please always read the official Check Point documents.

HP network card | Ports | Chipset | PCI ID | Driver | PCI | Speed | MQ
Ethernet 1Gb 4-port 331T | 4 | BCM5719 | 14e4:1657 | tg3 | PCI-E | 1G Copper | no
Ethernet 1Gb 4-port 366FLR | 4 | Intel I350 | 8086:1521 | igb | PCI-E | 1G Copper | yes
Ethernet 1Gb 4-port 366T | 4 | Intel I350 | 8086:1521 | igb | PCI-E | 1G Copper | yes
Ethernet 10Gb 2-port 560SFP+ | 2 | Intel 82599EB | 8086:10fb | ixgbe | PCI-E | 10G Fiber | yes
Ethernet 10Gb 2-port 561FLR-T | 2 | Intel X540-AT2 | 8086:1528 | ixgbe | PCI-E | 10G Copper | yes
HPE Ethernet 10Gb 2-port 562FLR-SFP+ | 2 | Intel X710 | 8086:1572 | i40e | PCI-E | 10G Copper | yes
Ethernet 10Gb 2-port 561T | 2 | Intel X540-AT2 | 8086:1528 | ixgbe | PCI-E | 10G Copper | yes
NC110T | 1 | Intel 82572GI | 8086:10b9 | e1000 | PCI-E | 10/100/1G Copper | no
NC320T | 1 | BCM5721 KFB | 14e4:1659 | tg3 | PCI-E | 10/100/1G Copper | no
NC325m Quad Port | 4 | BCM5715S | 14e4:1679 | tg3 | PCI-E | 1G Copper | no
NC326m PCI Express Dual Port 1Gb Server Adapter for c-Class BladeSystem | 2 | BCM5715S | - | tg3 | PCI-E | 1G Copper | no
NC340T | 4 | Intel 82546GB | 8086:10b5 | e1000 | PCI-X | 10/100/1G Copper | no
NC360T | 2 | Intel 82571EB | 8086:105e | e1000 | PCI-E | 10/100/1G Copper | no
NC364T | 4 | Intel 82571EB | 8086:10bc | e1000 | PCI-E | 10/100/1G Copper | no
NC365T PCI Express Quad Port | 4 | Intel 82580 | 8086:150e | igb | PCI-E | 10/100/1G Copper | yes
NC373F | 1 | Broadcom 5708 | 14e4:16ac | bnx2 | PCI-E | 1G Copper | no
NC373m Dual Port | 2 | BCM5708S | 14e4:16ac | bnx2 | PCI-E | 10/100/1G Copper | no
NC373T | 1 | Broadcom 5708 | 14e4:16ac | bnx2 | PCI-E | 10/100/1G Copper | no
NC380T PCI Express Dual Port Multifunction Gigabit Server | 2 | BCM5706 | - | bnx2 | PCI-E | 10/100/1G Copper | no
NC522SFP Dual Port 10GbE Server Adapter | 2 | NX3031 | 4040:0100 | ??? | PCI-E | 10G Fiber | no
NC550SFP Dual Port 10GbE Server Adapter | 2 | Emulex OneConnect | 19a2:0700 | be2net | PCI-E | 10G Fiber | no
NC552SFP 10GbE 2-port Ethernet Server | 2 | Emulex OneConnect | 19a2:0710 | be2net | PCI-E | 10G Fiber | no
NC7170 | 2 | Intel 82546EB | 8086:1010 | e1000 | PCI-X | 10/100/1G Copper | no

For all "?" entries I could not clarify the details exactly.

IBM network card | Ports | Chipset | PCI ID | Driver | PCI | Speed | MQ
Broadcom 10Gb 4-Port Ethernet Expansion Card (CFFh) for IBM BladeCenter | 4 | BCM57710 | - | bnx2x | PCI-E | 10G Fiber | no
Broadcom NetXtreme Quad Port GbE Network Adapter | 4 | I350 | - | igb | PCI-E | 1G Copper | yes
NetXuleme 1000T | 1 | ??? (1) | - | ??? | PCI-X | 10/100/1G Copper | ???
NetXuleme 1000T Dual | 2 | ??? (1) | - | ??? | PCI-X | 10/100/1G Copper | ???
PRO/1000 PT Dual Port Server Adapter | 2 | 82571GB | - | e1000 | PCI-E | 10/100/1G Copper | no

(1) These network cards can't even be found on Google.
Notes on the Intel igb and ixgbe drivers

I used the LKDDb database to identify the drivers. LKDDb is an attempt to build a comprehensive database of hardware and protocols known by Linux kernels. The driver database includes numeric identifiers of the hardware, the kernel configuration menu needed to build the driver, and the driver filename. The database is built automatically from the kernel sources, so it is very easy to keep it updated. This was the basis of the cross-reference between the Check Point HCL and the Intel drivers.

Link to the LKDDb web database: https://cateee.net/lkddb/web-lkddb/
Link to the LKDDb database drivers: igb, ixgbe, i40e, mlx5_core

There you can find the following output for each driver, e.g. for igb - numeric IDs (from LKDDb) and names (from pci.ids) of recognized devices:

vendor: 8086 ("Intel Corporation"), device: 0438 ("DH8900CC Series Gigabit Network Connection")
vendor: 8086 ("Intel Corporation"), device: 10a9 ("82575EB Gigabit Backplane Connection")
vendor: 8086 ("Intel Corporation"), device: 10c9 ("82576 Gigabit Network Connection")
vendor: 8086 ("Intel Corporation"), device: 10d6 ("82575GB Gigabit Network Connection")
and many more...

How to recognize the driver

With ethtool you can display the version and type of the driver, for example for the interface eth0:

# ethtool -i eth0
driver: igb
version: 2.1.0-k2
firmware-version: 3.2-9
bus-info: 0000:02:00.0

Active RX queues - formula

By default, the Security Gateway calculates the number of active RX queues based on this formula:

RX queues = [Total number of CPU cores] - [Number of CoreXL FW instances]

For example, on a gateway with 12 CPU cores and 10 CoreXL FW instances, Multi-Queue activates 12 - 10 = 2 RX queues by default.

Configure

Here I would refer to the following links:
- Performance Tuning R80.10 Administration Guide
- Performance Tuning R80.20 Administration Guide

References

- Best Practices - Security Gateway Performance
- Multi-Queue does not work on 3200 / 5000 / 15000 / 23000 appliances when it is enabled for on-board interfaces
- Performance Tuning R80.10 Administration Guide
- Performance Tuning R80.20 Administration Guide

Intel:
- Download Intel® Network Adapter Virtual Function Driver for Intel® 10 Gigabit Ethernet Network Connections
- Download Network Adapter Driver for Gigabit PCI Based Network Connections for Linux*
- Download Intel® Network Adapter Driver for 82575/6, 82580, I350, and I210/211-Based Gigabit Network Connections for Linu…

LKDDb (Linux Kernel Driver Database): https://cateee.net/lkddb/web-lkddb/

Copyright by Heiko Ankenbrand 1994-2019

Demonstrating pause frames

Hello! I am trying to find a mysterious source of packet loss on my R80.10 JHF 225 gateways. The administrator of the access layer says their switch is receiving "pause frames" from the firewall, and so it is dropping packets it cannot deliver in a timely manner. I am not sure how to evaluate this - from reading, it does not appear that pause frames would necessarily show up in a packet capture. I have also read that they perhaps originate exclusively from an endpoint or a switch. I tried a tcpdump from the gateway and the Wireshark filter "macc.opcode == pause" - no results.

In the specific scenario I am troubleshooting, which I hope is indicative of the larger problem, an attempt to connect to an HTTPS server reliably gets SYN - SYN/ACK - ACK - Client Hello ... Client Hello ... RST (from the server). We have seen this before with a QoS/CoS issue on our switch hardware.

In searching for similar issues, I found https://community.checkpoint.com/t5/General-Topics/Ifconfig-dropped-explanation/m-p/24447#M4885 but ifconfig does not report any RX or TX errors, so our situation does not map well to that scenario. I am not getting indications that the gateway is under any meaningful load, though cpview does show 195,627 "Instance High CPU" drops, on an "Inbound Packets/sec" rate of around 70k.

How can I determine whether the gateway is telling the switch to suspend passing packets?
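One place to look, since Ethernet flow control is handled below the capture point: the NIC's own counters and pause settings (a sketch; counter names vary by driver, these are typical for igb/ixgbe):

# ethtool -a eth0                                  # whether RX/TX pause (flow control) is negotiated on
# ethtool -S eth0 | grep -i -E 'pause|xon|xoff'    # per-NIC pause/XON/XOFF frame counters

If a tx_flow_control_xoff-style counter is incrementing, the gateway's NIC really is asking the switch to pause; tcpdump will not show these frames because they are consumed by the MAC layer.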

How to fully accelerate SIP RTP media streams using SecureXL

Hi,

We deployed a relatively simple Check Point vSEC security gateway as the perimeter firewall for a VoIP provider utilising SIP. Public IPs are routed directly to the servers, so the only NAT rules apply to VPN clients. We have an ongoing case with TAC regarding SecureXL not forwarding traffic on kernel 3.10, hence the gateway being R80.30 with kernel 2.6.18. We have Jumbo Hotfix Accumulator Take 50 installed, as the most recent GA release.

Architecture:
- VoIP server in a VLAN, with its gateway pointing at the Check Point security gateway
- Check Point security gateway has eth0 as internet upstream and eth1 in the VoIP server VLAN
- vSEC gateway managed by an external MDS environment on a non-publicly-routed subnet (management via eth0)

What we've done thus far:
- Changed protocol objects to not reference SIP, disabling protocol inspection
- Firewall blade policy set to use a custom UDP service object, rule 8.3
- Application and URL Filtering blade policy set to allow all inbound (rule 1) and all outbound traffic originating from the VoIP servers using the custom UDP service object, rule 8.1
- Threat Prevention policy exceptions have been defined
- Disabled Hyper-Threading on the VM host and pinned guest VM cores to reserved physical cores on CPU1 (attached to the network interfaces)

SIP RTP media UDP service object details: (screenshot)
Network (Firewall) blade policy layer: (screenshot)
Application (Applications & URL Filtering) blade policy layer: (screenshot)
Threat Prevention - Exceptions blade policy layer: (screenshot)

SecureXL stats:

[Expert@fwcp1:0]# fwaccel stat
+-----------------------------------------------------------------------------+
|Id|Name |Status  |Interfaces |Features                     |
+-----------------------------------------------------------------------------+
|0 |SND  |enabled |eth0,eth1  |Acceleration,Cryptography    |
|  |     |        |           |Crypto: Tunnel,UDPEncap,MD5, |
|  |     |        |           |SHA1,NULL,3DES,DES,CAST,     |
|  |     |        |           |CAST-40,AES-128,AES-256,ESP, |
|  |     |        |           |LinkSelection,DynamicVPN,    |
|  |     |        |           |NatTraversal,AES-XCBC,SHA256 |
+-----------------------------------------------------------------------------+
Accept Templates : enabled
Drop Templates   : enabled
NAT Templates    : enabled

[Expert@fwcp1:0]# fwaccel stats -s
Accelerated conns/Total conns : 10/1882 (0%)
Accelerated pkts/Total pkts   : 2199407627/4400568146 (49%)
F2Fed pkts/Total pkts         : 6510799/4400568146 (0%)
F2V pkts/Total pkts           : 3514127/4400568146 (0%)
CPASXL pkts/Total pkts        : 0/4400568146 (0%)
PSLXL pkts/Total pkts         : 2194649720/4400568146 (49%)
QOS inbound pkts/Total pkts   : 0/4400568146 (0%)
QOS outbound pkts/Total pkts  : 0/4400568146 (0%)
Corrected pkts/Total pkts     : 0/4400568146 (0%)

[Expert@fwcp1:0]# fwaccel stats
Name                         Value        Name                         Value
---------------------------- ------------ ---------------------------- ------------
Accelerated Path
--------------------------------------------------------------------------------------
accel packets                2199474632   accel bytes                  255604723479
outbound packets             2199468895   outbound bytes               255661260470
conns created                3331162      conns deleted                3329257
C total conns                1905         C TCP conns                  29
C non TCP conns              1876         nat conns                    0
dropped packets              26624        dropped bytes                2028392
fragments received           1280         fragments transmit           4
fragments dropped            0            fragments expired            0
IP options stripped          63           IP options restored          63
IP options dropped           0            corrs created                0
corrs deleted                0            C corrections                0
corrected packets            0            corrected bytes              0

Accelerated VPN Path
--------------------------------------------------------------------------------------
C crypt conns                0            enc bytes                    0
dec bytes                    0            ESP enc pkts                 0
ESP enc err                  0            ESP dec pkts                 0
ESP dec err                  0            ESP other err                0
espudp enc pkts              0            espudp enc err               0
espudp dec pkts              0            espudp dec err               0
espudp other err             0

Medium Streaming Path
--------------------------------------------------------------------------------------
CPASXL packets               0            PSLXL packets                2194716725
CPASXL async packets         0            PSLXL async packets          2194691770
CPASXL bytes                 0            PSLXL bytes                  253353244667
C CPASXL conns               0            C PSLXL conns                1895
CPASXL conns created         0            PSLXL conns created          3330706
PXL FF conns                 0            PXL FF packets               0
PXL FF bytes                 0            PXL FF acks                  0
PXL no conn drops            0

Inline Streaming Path
--------------------------------------------------------------------------------------
PSL Inline packets           0            PSL Inline bytes             0
CPAS Inline packets          0            CPAS Inline bytes            0

QoS Paths
--------------------------------------------------------------------------------------
QoS General Information:
------------------------
Total QoS Conns              0            QoS Classify Conns           0
QoS Classify flow            0            Reclassify QoS policy        0
FireWall QoS Path:
------------------
Enqueued IN packets          0            Enqueued OUT packets         0
Dequeued IN packets          0            Dequeued OUT packets         0
Enqueued IN bytes            0            Enqueued OUT bytes           0
Dequeued IN bytes            0            Dequeued OUT bytes           0
Accelerated QoS Path:
---------------------
Enqueued IN packets          0            Enqueued OUT packets         0
Dequeued IN packets          0            Dequeued OUT packets         0
Enqueued IN bytes            0            Enqueued OUT bytes           0
Dequeued IN bytes            0            Dequeued OUT bytes           0

Firewall Path
--------------------------------------------------------------------------------------
F2F packets                  6510843      F2F bytes                    4112863976
TCP violations               9            F2V conn match pkts          13981
F2V packets                  3514178      F2V bytes                    1410988147

GTP
--------------------------------------------------------------------------------------
gtp tunnels created          0            gtp tunnels                  0
gtp accel pkts               0            gtp f2f pkts                 0
gtp spoofed pkts             0            gtp in gtp pkts              0
gtp signaling pkts           0            gtp tcpopt pkts              0
gtp apn err pkts             0

General
--------------------------------------------------------------------------------------
memory used                  792          C tcp handshake conns        0
C tcp established conns      25           C tcp closed conns           4
C tcp pxl handshake conns    0            C tcp pxl established conns  25
C tcp pxl closed conns       4            outbound cpasxl packets      0
outbound pslxl packets       0            outbound cpasxl bytes        0
outbound pslxl bytes         0            DNS DoR stats                0

(*) Statistics marked with C refer to current values; others refer to total values.

Resource utilisation is very high, with two CoreXL instances and only 6 Mbps of traffic:

CPVIEW.Overview 15Nov2019 9:42:49

CPU:
  Num of CPUs: 2
  CPU  Used
  0    93%
  1    58%

Memory:
             Total MB  Used MB  Free MB
  Physical   3,815     1,842    1,973
  FW Kernel  3,052     785      2,267
  Swap       4,095     0        4,095

Network:
  Bits/sec                8,950K
  Packets/sec             15,889
  Connections/sec         17
  Concurrent connections  1,931

Disk space (top 3 used partitions):
  Partition  Total MB  Used MB  Free MB
  /          15,558    6,323    8,521
  /boot      288       23       250
  /var/log   19,806    876      17,908

Events:
  # of monitored daemons crashes since last cpstart: 0

Load average / CPU utilisation / Network throughput: (graphs not reproduced here)
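Since the goal is to move the RTP streams from PSLXL into the fully accelerated path, one option on R80.20 and above is the SecureXL fast_accel feature (a sketch based on sk156672; verify availability and the exact argument forms on your JHF take - the addresses and port range below are hypothetical placeholders for the VoIP subnet and RTP range):

# fw ctl fast_accel enable
# fw ctl fast_accel add 0.0.0.0/0 192.0.2.0/24 10000-20000 17   # any source -> VoIP subnet, RTP UDP ports, proto 17
# fw ctl fast_accel show_table

Traffic matching a fast_accel rule should then show up under the accelerated path in fwaccel stats -s instead of PSLXL.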

High CPU on Multi Queue Cores

Hardware: 13800 with 20 cores, 8/12 split, no SMT.
OS: R80.20 Take 47.
Blades enabled: none (just FW/VPN).

MQ is enabled on two 10G interfaces. The 4 CPU cores tied to these interfaces are running at 75-85%, with spikes up to 95%. One core is tied up with fwd. The other 3 SNDs are running at 1-2%. Workers are running around 50%.

From cpview: bandwidth 4-5 Gbps, 800K-900K packets/sec, 10K conn/sec. netstat -ni is NOT showing any drops.

[Expert@13800:0]# fwaccel stats -s
Accelerated conns/Total conns : (-3%)
Accelerated pkts/Total pkts   : (51%)
F2Fed pkts/Total pkts         : (4%)
F2V pkts/Total pkts           : (1%)
CPASXL pkts/Total pkts        : (0%)
PSLXL pkts/Total pkts         : (44%)

Question: what could be the reason for 44% PSLXL pkts/Total pkts? What can be done to reduce the load on the first 4 cores?
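For anyone comparing notes, the queue-to-core distribution behind those 4 busy cores can be inspected like this (a sketch; cpmq is the R80.20-era Multi-Queue tool, and /proc/interrupts is standard Linux):

# cpmq get -a                  # which interfaces have MQ and how many RX queues are active
# fw ctl affinity -l -a        # current interface/instance-to-core assignments
# grep eth /proc/interrupts    # per-queue interrupt counters, to see whether queues are evenly loaded

If the 10G interfaces support more RX queues than are currently active, raising the queue count (spreading load onto the 3 idle SND cores) may help; the PSLXL share itself is driven by which traffic needs streaming inspection, not by MQ.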

R80.x Ports Used for Communication by Various Check Point Modules

Introduction

This drawing should give you an overview of the ports used by R80 and R77 and the corresponding communication flows, i.e. how the different Check Point modules communicate with each other. Furthermore, services that are used for firewall operation are also considered; some of these services are also mapped as implied rules in the firewall rulebase.

Overview

Chapter

More interesting articles:
- R80.x Architecture and Performance Tuning - Link Collection
- Article list (Heiko Ankenbrand)

References

Support Center: Ports used by Check Point software

Versions

+ v1.5a typos corrected 18.09.2019

old version 1.4:
+ v1.4a bug fix, update port 1701 udp L2TP 09.04.2018
+ v1.4b bug fix 15.04.2018
+ v1.4c CPUSE update 17.04.2018
+ v1.4d legend fixed 17.04.2018
+ v1.4e add SmartLog and SmartView on port 443 20.04.2018
+ v1.4f bug fix 21.05.2018
+ v1.4g bug fix 25.05.2018
+ v1.4h add backup ports 21, 22, 69 UDP and ClusterXL full sync port 256 30.05.2018
+ v1.4i add port 259 udp VPN link probing 12.06.2018
+ v1.4j bug fix 17.06.2018
+ v1.4k add OSPF/BGP route sync 25.06.2018
+ v1.4l bug fix routed 29.06.2018
+ v1.4m bug fix tcp/udp ports 03.07.2018
+ v1.4n add port 256 13.07.2018
+ v1.4o bug fix / add TE ports 27.11.2018
+ v1.4p bug fix routed port 2010 23.01.2019
+ v1.4q change to new forum format 16.03.2019

old version 1.3:
+ v1.3a new design (blue, gray), bug fix, add NetFlow, new names 27.03.2018
+ v1.3b add routing ports, bug fix design 28.03.2018
+ v1.3c bug fix, rename ports (old) 29.03.2018
+ v1.3d bug fix 30.03.2018
+ v1.3e fix issue L2TP UDP port 1701

old version 1.1:
+ v1.1a added R80.xx ports 16.03.2018
+ v1.1b bug in drawing fixed 17.03.2018
+ v1.1c add RSA, TACACS, Radius 19.03.2018
+ v1.1d add 900, 259 client-auth; deleted old 4.0 ports 20.03.2018
+ v1.1e add OPSEC; delete R55 ports 21.03.2018
+ v1.1f bug fix 22.03.2018
+ v1.1g bug fix; add mail SMTP, DHCP, SNMP 25.03.2018

Copyright by Heiko Ankenbrand 1994-2019
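To verify on a live gateway which of these ports are actually open, the standard Linux tools on Gaia are enough (a sketch, run in expert mode; the port numbers shown are well-known examples such as 18190 CPMI, 18191 SIC, 18264 ICA certificate pull):

# netstat -anp | grep LISTEN                    # everything listening, with the owning process
# netstat -anp | grep -E '18190|18191|18264'    # spot-check specific Check Point ports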

Sync interface migration

Hello everyone, I have to migrate a sync link from its current interface eth3 to the interface eth2. Is there a best practice to avoid side effects like an Active/Active situation? Thanks for your expertise! Trif
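Whatever procedure you follow (the topology change itself is made in the cluster object in SmartConsole), it is worth verifying the cluster state before and after each step (a sketch using standard ClusterXL commands):

# cphaprob stat      # member states - confirm one Active / one Standby, no Active/Active
# cphaprob -a if     # interface status as seen by ClusterXL, including the sync interface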