
Netflow for R80.10

Has anyone ever sent NetFlow data to Stealthwatch? I can't find any data sheet that lists the collectors that are compatible with Check Point firewalls.
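For reference, NetFlow export is configured on the Gaia OS side, and any standard NetFlow v5/v9 collector (Stealthwatch included) should be able to receive it. A minimal sketch of the clish configuration, assuming a hypothetical collector at 10.1.1.50 on port 2055; verify the exact syntax against the Gaia Administration Guide for your version:
clish> add netflow collector ip 10.1.1.50 port 2055 export-format Netflow_V9 -> collector IP/port are placeholder values
clish> save config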
HeikoAnkenbrand inside Enterprise Appliances and Gaia OS yesterday

GAIA - Easy execute CLI commands on all gateways simultaneously

Now you can use the new commands "gw_mbash" and "gw_mclish" to execute bash or clish commands on all gateways simultaneously from the management server. All you have to do is copy and paste the lines below to the management server. After that you have two new commands on the management server, with which you can centrally execute simple commands on all gateways that are connected to the management via SIC. Attention! You can quickly destroy your gateways if you enter the wrong commands!

Command syntax:
# gw_detect
# gw_detect80
Detect all gateways that are supported by this tool. This command only needs to be executed once, or when gateways change in the topology. All found gateways are stored as IP addresses in the file /var/log/g_gateway.txt. These IP addresses are used later to execute commands on the gateways. The file can also be edited manually to add gateway IP addresses. The execution of this command may take a few minutes. On R80.x gateways use "gw_detect80", which is a little faster; on R77.x gateways use "gw_detect".
# gw_mbash <command>
Execute an expert mode command on all gateways simultaneously.
# gw_mclish <command>
Execute a clish command on all gateways simultaneously.

An example: you want to see the version of all gateways that are defined in the topology.
Management# gw_detect -> run this command first to detect all supported gateways (use "gw_detect80" on R80.x gateways)
Management# gw_mclish show version os edition -> execute this command on all gateways
Now the command "show version os edition" is executed on all gateways and the output is displayed on the management server, sorted by the IP addresses of the gateways in the firewall topology.

The same also works for the expert mode. For example:
Management# gw_detect -> run this command first to detect all supported gateways (use "gw_detect80" on R80.x gateways)
Management# gw_mbash fw ver -> execute this command on all gateways

Tip 1
Use this command to back up the clish configs of all gateways:
Management# gw_mclish show configuration > backup_clish_all_gateways.txt
This can also be run as a simple cronjob 😀.

Tip 2
Check central performance settings on all gateways:
Management# gw_mbash fw tab -t connections -s -> show the connections table for all gateways
Management# gw_mbash fwaccel stat -> show the SecureXL status for all gateways
Management# gw_mbash ips stat -> check on which gateways IPS is enabled
...

Copy and paste these lines to the management server, or download the script "new_multi_commands.sh" and execute it.

echo '#!/bin/bash' > /usr/local/bin/gw_mbash
echo 'if [ ! -f /var/log/g_gateway.txt ]; then' >> /usr/local/bin/gw_mbash
echo 'echo "First start \"gw_detect\" and/or edit the file /var/log/g_gateway.txt manually. Add here all your gateway IP addresses."' >> /usr/local/bin/gw_mbash
echo 'else' >> /usr/local/bin/gw_mbash
echo 'HAtest="$@"' >> /usr/local/bin/gw_mbash
echo 'echo $HAtest > /var/log/g_command.txt;' >> /usr/local/bin/gw_mbash
echo 'while read line' >> /usr/local/bin/gw_mbash
echo 'do' >> /usr/local/bin/gw_mbash
echo 'if $CPDIR/bin/cprid_util getarch -server $line |grep "gaia" > /dev/null;' >> /usr/local/bin/gw_mbash
echo 'then' >> /usr/local/bin/gw_mbash
echo 'echo "--------- GAIA $line execute command: $HAtest"' >> /usr/local/bin/gw_mbash
echo '$CPDIR/bin/cprid_util -server $line putfile -local_file /var/log/g_command.txt -remote_file /var/log/g_command.txt;' >> /usr/local/bin/gw_mbash
echo '$CPDIR/bin/cprid_util -server $line -verbose rexec -rcmd /bin/bash -f /var/log/g_command.txt' >> /usr/local/bin/gw_mbash
echo 'else' >> /usr/local/bin/gw_mbash
echo 'echo "--------- STOP $line Error: no SIC to gateway or no compatible gateway"' >> /usr/local/bin/gw_mbash
echo 'fi' >> /usr/local/bin/gw_mbash
echo 'done < /var/log/g_gateway.txt' >> /usr/local/bin/gw_mbash
echo 'fi' >> /usr/local/bin/gw_mbash
chmod +x /usr/local/bin/gw_mbash

echo '#!/bin/bash' > /usr/local/bin/gw_mclish
echo 'if [ ! -f /var/log/g_gateway.txt ]; then' >> /usr/local/bin/gw_mclish
echo 'echo "First start \"gw_detect\" and/or edit the file /var/log/g_gateway.txt manually. Add here all your gateway IP addresses."' >> /usr/local/bin/gw_mclish
echo 'else' >> /usr/local/bin/gw_mclish
echo 'HAtest="$@"' >> /usr/local/bin/gw_mclish
echo 'echo $HAtest > /var/log/g_command.txt;' >> /usr/local/bin/gw_mclish
echo 'while read line' >> /usr/local/bin/gw_mclish
echo 'do' >> /usr/local/bin/gw_mclish
echo 'if $CPDIR/bin/cprid_util getarch -server $line |grep "gaia" > /dev/null;' >> /usr/local/bin/gw_mclish
echo 'then' >> /usr/local/bin/gw_mclish
echo 'echo "--------- GAIA $line execute command: $HAtest"' >> /usr/local/bin/gw_mclish
echo '$CPDIR/bin/cprid_util -server $line putfile -local_file /var/log/g_command.txt -remote_file /var/log/g_command.txt;' >> /usr/local/bin/gw_mclish
echo '$CPDIR/bin/cprid_util -server $line -verbose rexec -rcmd /bin/clish -f /var/log/g_command.txt' >> /usr/local/bin/gw_mclish
echo 'else' >> /usr/local/bin/gw_mclish
echo 'echo "--------- STOP $line Error: no SIC to gateway or no compatible gateway"' >> /usr/local/bin/gw_mclish
echo 'fi' >> /usr/local/bin/gw_mclish
echo 'done < /var/log/g_gateway.txt' >> /usr/local/bin/gw_mclish
echo 'fi' >> /usr/local/bin/gw_mclish
chmod +x /usr/local/bin/gw_mclish

echo '#!/bin/bash' > /usr/local/bin/gw_detect
echo 'echo -n > /var/log/g_gateway.txt' >> /usr/local/bin/gw_detect
echo "more $FWDIR/conf/objects.C |grep -A 500 -B 1 ':type (gateway)'| sed -n '/gateway/,/:ipaddr (/p' | grep 'ipaddr (' | sed 's/^[ \t]*//' | sed 's/\:ipaddr (//' |sed 's/)//' > /var/log/g_gwl.txt" >> /usr/local/bin/gw_detect
echo 'while read line' >> /usr/local/bin/gw_detect
echo 'do' >> /usr/local/bin/gw_detect
echo 'if $CPDIR/bin/cprid_util getarch -server $line |grep "gaia" > /dev/null;' >> /usr/local/bin/gw_detect
echo 'then' >> /usr/local/bin/gw_detect
echo 'echo "--------- GAIA $line "' >> /usr/local/bin/gw_detect
echo 'echo "$line" >> /var/log/g_gateway.txt' >> /usr/local/bin/gw_detect
echo 'else' >> /usr/local/bin/gw_detect
echo 'echo "--------- STOP no SIC to gateway or no compatible gateway"' >> /usr/local/bin/gw_detect
echo 'fi' >> /usr/local/bin/gw_detect
echo 'done < /var/log/g_gwl.txt' >> /usr/local/bin/gw_detect
chmod +x /usr/local/bin/gw_detect

echo '#!/bin/bash' > /usr/local/bin/gw_detect80
echo 'echo -n > /var/log/g_gateway.txt' >> /usr/local/bin/gw_detect80
echo "mgmt_cli -r true show gateways-and-servers details-level full --format json | $CPDIR/jq/jq -r '.objects[] | select(.type | contains(\"Member\",\"simple-gateway\")) | .\"ipv4-address\"' |grep -v null|grep -v 0.0. > /var/log/g_gwl.txt" >> /usr/local/bin/gw_detect80
echo 'while read line' >> /usr/local/bin/gw_detect80
echo 'do' >> /usr/local/bin/gw_detect80
echo 'if $CPDIR/bin/cprid_util getarch -server $line |grep "gaia" > /dev/null;' >> /usr/local/bin/gw_detect80
echo 'then' >> /usr/local/bin/gw_detect80
echo 'echo "--------- GAIA $line "' >> /usr/local/bin/gw_detect80
echo 'echo "$line" >> /var/log/g_gateway.txt' >> /usr/local/bin/gw_detect80
echo 'else' >> /usr/local/bin/gw_detect80
echo 'echo "--------- STOP no SIC to gateway or no compatible gateway"' >> /usr/local/bin/gw_detect80
echo 'fi' >> /usr/local/bin/gw_detect80
echo 'done < /var/log/g_gwl.txt' >> /usr/local/bin/gw_detect80
chmod +x /usr/local/bin/gw_detect80

Versions:
v0.1 - 04-14-2019 - gw_multi_commands_v0.1.sh -> beta
v0.2 - 04-16-2019 - gw_multi_commands_v0.2.sh -> remove bugs
v0.3 - 04-17-2019 - gw_multi_commands_v0.3.sh -> split into two commands (gw_detect and the old commands)
v0.4 - 05-05-2019 - gw_multi_commands_v0.4.sh -> add command "gw_detect80"

Video tutorial: (view in My Videos)
Copyright by Heiko Ankenbrand 1996-2019

R80.30 - DELL R740

Is the DELL R740 compatible with R80.30?

GAIA - Backup all clish configs from all gateways with one CLI command.

In the last few days I have built a tool to execute clish commands and bash commands centrally from the management server on all gateways. All you need to do is run a small script from this CheckMates article: GAIA - Easy execute CLI commands on all gateways simultaneously. Then you can save the clish config of all gateways centrally to the management server with the following command:
Management# gw_mclish show configuration > backup_clish_all_gateways.txt
This can also be run as a simple cronjob 😀 (a sketch follows below).
Demo video: (view in My Videos)
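A minimal sketch of such a cronjob, assuming the gw_mclish command from the article above is already installed on the management server and that /var/log/backup is a directory you created for the output (both are assumptions, adjust as needed):
# crontab -e -> edit the crontab of the admin/root user on the management server
Then add a line like this (runs every night at 01:00; note that % must be escaped as \% inside a crontab):
0 1 * * * /usr/local/bin/gw_mclish show configuration > /var/log/backup/backup_clish_all_gateways_$(date +\%Y\%m\%d).txt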
HeikoAnkenbrand inside Enterprise Appliances and Gaia OS yesterday

R80.x Performance Tuning Tip – Multi Queue

What is Multi-Queue? It is an acceleration feature that lets you assign more than one packet queue and CPU to an interface. When most of the traffic is accelerated by SecureXL, the CPU load from the CoreXL SND instances can be very high, while the CPU load from the CoreXL FW instances can be very low. This is an inefficient utilization of CPU capacity.

By default, the number of CPU cores allocated to CoreXL SND instances is limited by the number of network interfaces that handle the traffic. Because each interface has one traffic queue, only one CPU core can handle each traffic queue at a time. This means that each CoreXL SND instance can use only one CPU core at a time for each network interface. Check Point Multi-Queue lets you configure more than one traffic queue for each network interface, so that for each interface you can use more than one CPU core (running CoreXL SND) for traffic acceleration. This balances the load efficiently between the CPU cores that run the CoreXL SND instances and the CPU cores that run the CoreXL FW instances. Important - Multi-Queue applies only if SecureXL is enabled.

Chapter
Architecture:
R80.x Security Gateway Architecture (Logical Packet Flow)
R80.x Security Gateway Architecture (Content Inspection)
R80.x Security Gateway Architecture (Acceleration Card Offloading)
R80.x Ports Used for Communication by Various Check Point Modules
Performance Tuning:
R80.x Performance Tuning Tip - AES-NI
R80.x Performance Tuning Tip - SMT (Hyper Threading)
R80.x Performance Tuning Tip - Multi Queue
R80.x Performance Tuning Tip - Connection Table
R80.x Performance Tuning Tip - fw monitor
R80.x Performance Tuning Tip - TCPDUMP vs. CPPCAP
R80.x Performance Tuning Tip - DDoS "fw sam" vs. "fwaccel dos"
Cheat Sheet:
R80.x cheat sheet - fw monitor
R80.x cheat sheet - ClusterXL
More interesting articles:
Article list (Heiko Ankenbrand)

Multi-Queue Requirements and Limitations
Tip 1
- Multi-Queue is not supported on computers with one CPU core.
- Network interfaces must use a driver that supports Multi-Queue. Only network cards that use the igb (1Gb), ixgbe (10Gb), i40e (40Gb), or mlx5_core (40Gb) drivers support Multi-Queue.
- You can configure a maximum of five interfaces with Multi-Queue.
- You must reboot the Security Gateway after any change to the Multi-Queue configuration.
- For best performance, it is not recommended to assign both an SND and a CoreXL FW instance to the same CPU core.
- Do not change the IRQ affinity of the queues manually; this can adversely affect performance.
- Multi-Queue is relevant only if SecureXL and CoreXL are enabled.
- You cannot use the "sim affinity" or the "fw ctl affinity" commands to change or query the IRQ affinity of Multi-Queue interfaces.
- The number of queues is limited by the number of CPU cores and the type of interface driver:

Network card driver | Speed | Maximal number of RX queues
igb | 1 Gb | 4
ixgbe | 10 Gb | 16
i40e | 40 Gb | 14
mlx5_core | 40 Gb | 10

The maximum RX queues limit dictates the largest number of SND/IRQ instances that can empty packet buffers for an individual interface that uses that driver and has Multi-Queue enabled.
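To see whether Multi-Queue is already enabled and how many RX queues an interface currently uses, you can query the cpmq tool on the gateway. A short sketch (cpmq is the Multi-Queue management tool on R80.x kernels before Gaia 3.10; the driver and number shown are only examples):
# cpmq get -a -> show the Multi-Queue status of all supported interfaces
# cpmq set -> interactively enable or disable Multi-Queue per interface
# cpmq set rx_num igb 2 -> example: set the number of active RX queues for igb interfaces (see sk114625 below)
Remember to reboot the gateway after any Multi-Queue change, as listed in the requirements above.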
Multi-Queue does not work on 3200 / 5000 / 15000 / 23000 appliances in the following scenario (sk114625):
- MQ is enabled for on-board interfaces (e.g., Mgmt, Sync)
- the number of active RX queues was set to either 3 or 4 (with the "cpmq set rx_num igb <number>" command)
This problem was fixed in: Check Point R80.10; Jumbo Hotfix Accumulator for R77.30 - since Take_198. The number of traffic queues is limited by the number of CPU cores and the type of network interface card driver. The on-board interfaces on these appliances use the igb driver, which supports up to 4 RX queues. However, the I211 controller on these on-board interfaces supports only up to 2 RX queues.

When Multi-Queue will not help
Tip 2
- When most of the processing is done in CoreXL, either in the Medium path or in the Firewall path (Slow path), and all current CoreXL FW instances are highly loaded, so there are no CPU cores that can be reassigned to SecureXL.
- When IPS or other deep inspection Software Blades are heavily used.
- When all network interface cards are processing the same amount of traffic.
- When all CPU cores that are currently used by SecureXL are congested.
- When trying to increase the traffic session rate.
- When there is not enough diversity of traffic flows. In the extreme case of a single flow, traffic will be handled only by a single CPU core. (Clarification: the more traffic passes to/from different ports/IP addresses, the more you benefit from Multi-Queue. If there is a single traffic flow from a single client to a single server, Multi-Queue will not help.)

Multi-Queue is recommended when:
- The load on CPU cores that run as SND is high (idle < 20%).
- The load on CPU cores that run CoreXL FW instances is low (idle > 50%).
- There are no CPU cores left to be assigned to the SND by changing interface affinity.

Multi-Queue support on Appliance vs. Open Server

Gateway type | Network interfaces that support Multi-Queue
Check Point Appliance | MQ is supported on all appliances that use the following drivers: igb, ixgbe, i40e, mlx5_core. These expansion line cards for 4000, 12000, and 21000 appliances support Multi-Queue: CPAC-ACC-4-1C, CPAC-ACC-4-1F, CPAC-ACC-8-1C, CPAC-ACC-2-10F, CPAC-ACC-4-10F. This expansion line card for 5000, 13000, and 23000 appliances supports Multi-Queue: CPAC-2-40F-B.
Open Server | Network cards that use the igb (1Gb), ixgbe (10Gb), i40e (40Gb), or mlx5_core (40Gb) drivers support Multi-Queue.

Multi-Queue support on Open Server (Intel Network Cards)
Tip 3
The following list shows an overview of all Intel cards from the Check Point HCL for open servers as of 11/21/2018. The list is cross-referenced to the Intel drivers. I do not assume any liability for the correctness of the information. These lists should only be used to help you find the right drivers. This is not an official Check Point document, so please always read the official Check Point documents.
Intel network card | Ports | Chipset | PCI ID | Driver | PCI | Speed | MQ
10 Gigabit AT | 1 | 82598EB | 8086:25e7 | ixgbe | PCI-E | 10G Copper | yes
10 Gigabit CX4 | 2 | 82598EB | 8086:10ec | ixgbe | PCI-E | 10G Copper | yes
10 Gigabit XF family (Dual and Single Port models, SR and LR) | 2 | 82598 | 8086:10c6 | ixgbe | PCI-E | 10G Fiber | yes
Ethernet Converged Network Adapter X540-T2 | 2 | X540 | 8086:1528 | ixgbe | PCI-E | 100/1G/10G Copper | yes
Ethernet Server Adapter I340-T2 | 2 | 82580 | - | igb | PCI-E | 10/100/1G Copper | yes
Ethernet Server Adapter I340-T4 | 2 | 82580 | - | igb | PCI-E | 10/100/1G Copper | yes
Ethernet Server Adapter X520 (X520-SR2, X520-SR1, X520-LR1, X520-DA2) | 2 | X520 | - | ixgbe | PCI-E | 10G Fiber | yes
Gigabit VT Quad Port Server Adapter | 4 | 82575GB | 8086:10d6 | igb | PCI-E | 10/100/1G Copper | yes
Intel Gigabit ET2 Quad Port Server Adapter | 4 | - | - | igb | PCI-E | 1G Copper | yes
PRO/10GbE CX4 | 1 | 82597EX | 8086:109e | ixgb | PCI-X | 10G Copper | no
PRO/10GbE LR | 1 | 82597EX | 8086:1b48 | ixgb | PCI-X | 10G Fiber | no
PRO/10GbE SR | 1 | 82597EX | 8086:1a48 | ixgb | PCI-X | 10G Fiber | no
PRO/1000 Dual 82546GB | 2 | 82546GB | 8086:108a | e1000 | PCI-E | 10/100/1G Copper | no
Pro/1000 EF Dual | 2 | 82576 | 8086:10e6 | igb ? | PCI-E | 1G Fiber | yes ?
Pro/1000 ET Dual Port Server Adapter | 2 | 82576 | - | igb | PCI-E | 1G Copper | yes
PRO/1000 ET Quad Port Server Adapter | 4 | 82576 | 8086:10e8 | igb | PCI-E | 10/100/1G Copper | yes
PRO/1000 GT Quad | 4 | 82546 | 8086:10b5 | e1000 | PCI-X | 10/100/1G Copper | no
PRO/1000 MF | 1 | 82546 ? 82545 ? | - | e1000 | PCI-X | 1G Fiber | no
PRO/1000 MF (LX) | 1 | 82546 ? 82545 ? | - | e1000 | PCI-X | 1G Fiber | no
PRO/1000 MF Dual | 2 | 82546 ? 82545 ? | - | e1000 | PCI-X | 1G Fiber | no
PRO/1000 MF Quad | 4 | 82546 ? 82545 ? | - | e1000 | PCI-X | 1G Fiber | no
PRO/1000 PF | 1 | 82571 ? | 8086:107e | e1000 | PCI-E | 1G Fiber | no
PRO/1000 PF Dual | 2 | 82571 ? | 8086:115f | e1000 | PCI-E | 1G Fiber | no
PRO/1000 PF Quad Port Server Adapter | 4 | 82571 ? | 8086:10a5 | e1000 | PCI-E | 1G Fiber | no
PRO/1000 PT | 1 | 82571 | 8086:1082 | e1000 | PCI-E | 10/100/1G Copper | no
PRO/1000 PT Dual | 2 | 82571 | 8086:105e | e1000 | PCI-E | 10/100/1G Copper | no
PRO/1000 PT Dual UTP | 2 | 82571 | 8086:108a | e1000 | PCI-E | 10/100/1G Copper | no
PRO/1000 PT Quad | 4 | 82571 | 8086:10a4 | e1000 | PCI-E | 10/100/1G Copper | no
PRO/1000 PT Quad Low Profile | 4 | 82571 | 8086:10bc | e1000 | PCI-E | 10/100/1G Copper | no
PRO/1000 XF | 1 | 82544 | - | e1000 | PCI-X | 1G Fiber | no

For all "?" entries I could not clarify the details exactly.

Multi-Queue support on Open Server (HP and IBM Network Cards)
Tip 4
The following list shows an overview of all HP cards from the Check Point HCL for open servers as of 11/22/2018. The list is cross-referenced to the Intel drivers. I do not assume any liability for the correctness of the information. These lists should only be used to help you find the right drivers. This is not an official Check Point document, so please always read the official Check Point documents.
HP network card | Ports | Chipset | PCI ID | Driver | PCI | Speed | MQ
Ethernet 1Gb 4-port 331T | 4 | BCM5719 | 14e4:1657 | tg3 | PCI-E | 1G Copper | no
Ethernet 1Gb 4-port 366FLR | 4 | Intel I350 | 8086:1521 | igb | PCI-E | 1G Copper | yes
Ethernet 1Gb 4-port 366T | 4 | Intel I350 | 8086:1521 | igb | PCI-E | 1G Copper | yes
Ethernet 10Gb 2-port 560SFP+ | 2 | Intel 82599EB | 8086:10fb | ixgbe | PCI-E | 10G Fiber | yes
Ethernet 10Gb 2-port 561FLR-T | 2 | Intel X540-AT2 | 8086:1528 | ixgbe | PCI-E | 10G Copper | yes
HPE Ethernet 10Gb 2-port 562FLR-SFP+ | 2 | Intel X710 | 8086:1572 | i40e | PCI-E | 10G Copper | yes
Ethernet 10Gb 2-port 561T | 2 | Intel X540-AT2 | 8086:1528 | ixgbe | PCI-E | 10G Copper | yes
NC110T | 1 | Intel 82572GI | 8086:10b9 | e1000 | PCI-E | 10/100/1G Copper | no
NC320T | 1 | BCM5721 KFB | 14e4:1659 | tg3 | PCI-E | 10/100/1G Copper | no
NC325m Quad Port | 4 | BCM5715S | 14e4:1679 | tg3 | PCI-E | 1G Copper | no
NC326m PCI Express Dual Port 1Gb Server Adapter for c-Class Blade System | 2 | BCM5715S | - | tg3 | PCI-E | 1G Copper | no
NC340T | 4 | Intel 82546GB | 8086:10b5 | e1000 | PCI-X | 10/100/1G Copper | no
NC360T | 2 | Intel 82571EB | 8086:105e | e1000 | PCI-E | 10/100/1G Copper | no
NC364T (Official site) | 4 | Intel 82571EB | 8086:10bc | e1000 | PCI-E | 10/100/1G Copper | no
NC365T PCI Express Quad Port | 4 | Intel 82580 | 8086:150e | igb | PCI-E | 10/100/1G Copper | yes
NC373F | 1 | Broadcom 5708 | 14e4:16ac | bnx2 | PCI-E | 1G Copper | no
NC373m Dual Port | 2 | BCM5708S | 14e4:16ac | bnx2 | PCI-E | 10/100/1G Copper | no
NC373T | 1 | Broadcom 5708 | 14e4:16ac | bnx2 | PCI-E | 10/100/1G Copper | no
NC380T PCI Express Dual Port Multifunction Gigabit Server | 2 | BCM5706 | - | bnx2 | PCI-E | 10/100/1G Copper | no
NC522SFP Dual Port 10GbE Server Adapter | 2 | NX3031 | 4040:0100 | ??? | PCI-E | 10G Fiber | no
NC550SFP Dual Port 10GbE Server Adapter (Official site) | 2 | Emulex OneConn | 19a2:0700 | be2net | PCI-E | 10G Fiber | no
NC552SFP 10GbE 2-port Ethernet Server | 2 | Emulex OneConn | 19a2:0710 | be2net | PCI-E | 10G Fiber | no
NC7170 | 2 | Intel 82546EB | 8086:1010 | e1000 | PCI-X | 10/100/1G Copper | no

For all "?" entries I could not clarify the details exactly.

IBM network card | Ports | Chipset | PCI ID | Driver | PCI | Speed | MQ
Broadcom 10Gb 4-Port Ethernet Expansion Card (CFFh) for IBM BladeCenter | 4 | BCM57710 | - | bnx2x | PCI-E | 10G Fiber | no
Broadcom NetXtreme Quad Port GbE Network Adapter | 4 | I350 | - | igb | PCI-E | 1G Copper | yes
NetXuleme 1000T | 1 | ??? (1) | - | ??? | PCI-X | 10/100/1G Copper | ???
NetXuleme 1000T Dual | 2 | ??? (1) | - | ??? | PCI-X | 10/100/1G Copper | ???
PRO/1000 PT Dual Port Server Adapter | 2 | 82571GB | - | e1000 | PCI-E | 10/100/1G Copper | no

(1) These network cards can't even be found on Google.

Notes on the Intel igb and ixgbe drivers
I used the LKDDb database to identify the drivers. LKDDb is an attempt to build a comprehensive database of hardware and protocols known by Linux kernels. The driver database includes numeric identifiers of the hardware, the kernel configuration menu needed to build the driver, and the driver filename. The database is built automatically from the kernel sources, so it is very easy to keep it up to date. This was the basis for the cross-reference between the Check Point HCL and the Intel drivers.
Link to the LKDDb web database: https://cateee.net/lkddb/web-lkddb/
Link to the LKDDb database drivers: igb, ixgbe, i40e, mlx5_core
There you can find the following output for each driver, e.g. igb:
Numeric ID (from LKDDb) and names (from pci.ids) of recognized devices:
vendor: 8086 ("Intel Corporation"), device: 0438 ("DH8900CC Series Gigabit Network Connection")
vendor: 8086 ("Intel Corporation"), device: 10a9 ("82575EB Gigabit Backplane Connection")
vendor: 8086 ("Intel Corporation"), device: 10c9 ("82576 Gigabit Network Connection")
vendor: 8086 ("Intel Corporation"), device: 10d6 ("82575GB Gigabit Network Connection")
and many more...
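If you want to map a card that is already installed in your open server to the PCI IDs and drivers in the tables above, you can read the vendor:device ID directly from the system. A small sketch using standard Linux tools available in Gaia expert mode (eth0 is an example interface name):
# lspci -nn | grep -i ethernet -> prints the PCI IDs, e.g. [8086:10fb], which you can look up in the PCI ID column above
# ethtool -i eth0 | grep driver -> shows which driver (igb, ixgbe, e1000, ...) is actually bound to the interface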
How to recognize the driver
With ethtool you can display the version and type of the driver, for example for the interface eth0:
# ethtool -i eth0
driver: igb
version: 2.1.0-k2
firmware-version: 3.2-9
bus-info: 0000:02:00.0

Active RX multi queues - formula
By default, the Security Gateway calculates the number of active RX queues based on this formula:
RX queues = [Total Number of CPU cores] - [Number of CoreXL FW instances]
(A worked example with hypothetical numbers follows at the end of this post.)

Configure
Here I would refer to the following links:
Performance Tuning R80.10 Administration Guide
Performance Tuning R80.20 Administration Guide

References
Best Practices - Security Gateway Performance
Multi-Queue does not work on 3200 / 5000 / 15000 / 23000 appliances when it is enabled for on-board interfaces
Performance Tuning R80.10 Administration Guide
Performance Tuning R80.20 Administration Guide
Intel:
Download Intel® Network Adapter Virtual Function Driver for Intel® 10 Gigabit Ethernet Network Connections
Download Network Adapter Driver for Gigabit PCI Based Network Connections for Linux*
Download Intel® Network Adapter Driver for 82575/6, 82580, I350, and I210/211-Based Gigabit Network Connections for Linu…
LKDDb (Linux Kernel Driver Database): https://cateee.net/lkddb/web-lkddb/

Copyright by Heiko Ankenbrand 1994-2019
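A worked example of the RX queue formula above, with hypothetical numbers: on a gateway with 24 CPU cores and 18 CoreXL FW instances, the Security Gateway would activate 24 - 18 = 6 RX queues (capped by the driver limits in the table above). The two inputs can be read on the gateway like this (the values in the notes are only examples):
# grep -c ^processor /proc/cpuinfo -> total number of CPU cores (e.g. 24)
# fw ctl multik stat | grep -c Yes -> number of active CoreXL FW instances (e.g. 18)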

OSPF Route during cpstop

Hey, is there any option to keep routes learned by OSPF during cpstop? Thanks

High dispatcher cpu

I've had ongoing issues since moving to R80.x with high dispatcher core usage. We have 21800 gateways. We've had multiple TAC cases; they had us add a 3rd dispatcher core and set the affinity for the cores manually, but I still consistently see the 10Gb interfaces spike the dispatcher CPUs in what should be low-utilization situations for this model of gateway. We are on R80.20 jumbo take 87; the gateways were freshly reinstalled directly on R80.20 after continuous crashes across both cluster members in the last few days (we will not be upgrading the cluster in place again). Priority queues are off per TAC. Any ideas would be appreciated. I've put some info below for context.

CPU 0: eth1-02 eth1-04
CPU 1: eth3-02 eth1-01 eth1-03
CPU 2: eth3-01 eth3-03 eth3-04 Mgmt
CPU 3: fw_16 in.acapd fgd50 lpd cp_file_convertd rtmd usrchkd rad in.geod mpdaemon in.msd wsdnsd pdpd pepd in.asessiond vpnd fwd cpd cprid
CPU 4: fw_15 in.acapd fgd50 lpd cp_file_convertd rtmd usrchkd rad in.geod mpdaemon in.msd wsdnsd pdpd pepd in.asessiond vpnd fwd cpd cprid
CPU 5: fw_14 in.acapd fgd50 lpd cp_file_convertd rtmd usrchkd rad in.geod mpdaemon in.msd wsdnsd pdpd pepd in.asessiond vpnd fwd cpd cprid
CPU 6: fw_13 in.acapd fgd50 lpd cp_file_convertd rtmd usrchkd rad in.geod mpdaemon in.msd wsdnsd pdpd pepd in.asessiond vpnd fwd cpd cprid
CPU 7: fw_12 in.acapd fgd50 lpd cp_file_convertd rtmd usrchkd rad in.geod mpdaemon in.msd wsdnsd pdpd pepd in.asessiond vpnd fwd cpd cprid
CPU 8: fw_11 in.acapd fgd50 lpd cp_file_convertd rtmd usrchkd rad in.geod mpdaemon in.msd wsdnsd pdpd pepd in.asessiond vpnd fwd cpd cprid
CPU 9: fw_10 in.acapd fgd50 lpd cp_file_convertd rtmd usrchkd rad in.geod mpdaemon in.msd wsdnsd pdpd pepd in.asessiond vpnd fwd cpd cprid
CPU 10: fw_9 in.acapd fgd50 lpd cp_file_convertd rtmd usrchkd rad in.geod mpdaemon in.msd wsdnsd pdpd pepd in.asessiond vpnd fwd cpd cprid
CPU 11: fw_8 in.acapd fgd50 lpd cp_file_convertd rtmd usrchkd rad in.geod mpdaemon in.msd wsdnsd pdpd pepd in.asessiond vpnd fwd cpd cprid
CPU 12: fw_7 in.acapd fgd50 lpd cp_file_convertd rtmd usrchkd rad in.geod mpdaemon in.msd wsdnsd pdpd pepd in.asessiond vpnd fwd cpd cprid
CPU 13: fw_6 in.acapd fgd50 lpd cp_file_convertd rtmd usrchkd rad in.geod mpdaemon in.msd wsdnsd pdpd pepd in.asessiond vpnd fwd cpd cprid
CPU 14: fw_5 in.acapd fgd50 lpd cp_file_convertd rtmd usrchkd rad in.geod mpdaemon in.msd wsdnsd pdpd pepd in.asessiond vpnd fwd cpd cprid
CPU 15: fw_4 in.acapd fgd50 lpd cp_file_convertd rtmd usrchkd rad in.geod mpdaemon in.msd wsdnsd pdpd pepd in.asessiond vpnd fwd cpd cprid
CPU 16: fw_3 in.acapd fgd50 lpd cp_file_convertd rtmd usrchkd rad in.geod mpdaemon in.msd wsdnsd pdpd pepd in.asessiond vpnd fwd cpd cprid
CPU 17: fw_2 in.acapd fgd50 lpd cp_file_convertd rtmd usrchkd rad in.geod mpdaemon in.msd wsdnsd pdpd pepd in.asessiond vpnd fwd cpd cprid
CPU 18: fw_1 in.acapd fgd50 lpd cp_file_convertd rtmd usrchkd rad in.geod mpdaemon in.msd wsdnsd pdpd pepd in.asessiond vpnd fwd cpd cprid
CPU 19: fw_0 in.acapd fgd50 lpd cp_file_convertd rtmd usrchkd rad in.geod mpdaemon in.msd wsdnsd pdpd pepd in.asessiond vpnd fwd cpd cprid
All: scanengine_b

sim affinity -l
Mgmt : 2
eth1-01 : 1
eth1-02 : 0
eth1-03 : 1
eth1-04 : 0
eth3-01 : 2
eth3-02 : 1
eth3-03 : 2
eth3-04 : 2

fwaccel stats -s
Accelerated conns/Total conns : 5632/11726 (48%)
Accelerated pkts/Total pkts : 87217254/91349341 (95%)
F2Fed pkts/Total pkts : 4132087/91349341 (4%)
F2V pkts/Total pkts : 349157/91349341 (0%)
CPASXL pkts/Total pkts : 78606120/91349341 (86%)
PSLXL pkts/Total pkts : 5653296/91349341 (6%)
CPAS inline pkts/Total pkts : 0/91349341 (0%)
PSL inline pkts/Total pkts : 0/91349341 (0%)
QOS inbound pkts/Total pkts : 89478397/91349341 (97%)
QOS outbound pkts/Total pkts : 90834289/91349341 (99%)
Corrected pkts/Total pkts : 0/91349341 (0%)

fw ctl multik stat
ID | Active | CPU | Connections | Peak
----------------------------------------------
0 | Yes | 19 | 927 | 1104
1 | Yes | 18 | 805 | 943
2 | Yes | 17 | 781 | 1606
3 | Yes | 16 | 814 | 1217
4 | Yes | 15 | 944 | 1722
5 | Yes | 14 | 895 | 1152
6 | Yes | 13 | 1102 | 1680
7 | Yes | 12 | 781 | 1674
8 | Yes | 11 | 1063 | 1063
9 | Yes | 10 | 741 | 1024
10 | Yes | 9 | 1002 | 1053
11 | Yes | 8 | 810 | 1016
12 | Yes | 7 | 799 | 964
13 | Yes | 6 | 831 | 1837
14 | Yes | 5 | 833 | 1017
15 | Yes | 4 | 841 | 1087
16 | Yes | 3 | 862 | 1329

free -m
             total       used       free     shared    buffers     cached
Mem:         64282      12574      51708          0        105       3124
-/+ buffers/cache:        9344      54938
Swap:        32765          0      32765
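Not an answer, but two quick checks that often help narrow down high dispatcher load, sketched with standard tools (interface names are examples):
# netstat -ni -> watch the RX-DRP / RX-OVR counters on the 10Gb interfaces; rising values point at overloaded SND cores
# top -> press 1 to see per-CPU usage; a high "si" (softirq) share on CPU 0-2 matches the dispatcher cores listed above
# cpview -> interactive monitoring; the CPU and top-connections views help spot single heavy (elephant) flows that pin one dispatcher core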

CPAP-23800 and CPAC-2-40F-B

Hi, are there guidelines on how to install two CPAC-2-40F-B modules into a CPAP-23800 appliance in order to achieve optimal performance (CPU/PCIe topology)? I had a look at the various guides, sk116742 and sk107516, but wasn't able to find any information about which CPU connects to which expansion slot. Bye, Michael
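For reference, the PCIe topology can be read from the running appliance itself. A sketch using standard Linux tools from expert mode (the interface name is an example; a numa_node value of -1 simply means the kernel sees a single NUMA domain):
# lspci -tv -> PCIe device tree; shows behind which root port/bridge each CPAC line card sits
# cat /sys/class/net/eth1-01/device/numa_node -> NUMA node (CPU socket) the card is attached to
# fw ctl affinity -l -r -> compare this with the current interface-to-CPU affinity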

OSPF Instances R80.20

Good day, mates. I have recently read about the possibility of creating different OSPF instances in R80.20. This feature is really important for us, as we have had issues with OSPF before and decided to use static routes instead. I would like to know if anyone has already implemented OSPF instances and whether it is working as expected. Thanks in advance.
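For reference, the R80.20 Gaia clish syntax for an additional OSPF instance looks roughly like the sketch below. This is untested here and the instance ID, interface, and area are made-up values, so verify the exact syntax against the R80.20 Gaia Advanced Routing Administration Guide:
clish> set ospf instance 10 area backbone on
clish> set ospf instance 10 interface eth2 area backbone on
clish> save config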

SSH Cipher, SSH Hmac Version

Can anyone provide me the steps to change the SSH server ciphers and HMAC algorithms? Thanks, Win
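For reference, on Gaia the SSH server ciphers and MACs are usually restricted in the SSH daemon configuration from expert mode; there is no dedicated clish command for this on older versions as far as I know. A sketch with example algorithm lists (which algorithms are accepted depends on the OpenSSH version in your Gaia build, and the file may be reverted by upgrades, so treat this as an assumption to verify):
# vi /etc/ssh/sshd_config -> set, for example: Ciphers aes256-ctr,aes192-ctr,aes128-ctr and MACs hmac-sha2-512,hmac-sha2-256
# sshd -t -> syntax-check the new configuration
# service sshd restart -> apply the change (do this from console access, not over SSH)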

R80.20 MTU and SecureXL Problem

Hello, we have an Ethernet link (no Check Point VPN) to a network where the MTU is 1422. If we set the MTU on the interface and disable SecureXL, the clients (with a default MTU of 1500) get the ICMP Fragmentation Needed packet and start to send packets with a smaller MTU. When we reactivate SecureXL, the clients start to send 1500-byte packets again and do not get an ICMP Fragmentation Needed packet from the firewall. We are using a Check Point 5600 cluster with R80.20 and the latest HFA. Did anybody have the same problem? Jan
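One way to narrow this down is to test Path MTU Discovery from a client with the don't-fragment bit set while toggling SecureXL. A sketch (the target IP and sizes are examples; 1450 bytes of payload plus 28 bytes of headers is above the 1422 link MTU but below the client's 1500):
C:\> ping -f -l 1450 10.2.2.10 -> Windows client, DF set; the client should report "Packet needs to be fragmented but DF set" if the ICMP message arrives
# fwaccel off -> temporarily disable SecureXL for comparison (fwaccel on to re-enable)
# tcpdump -ni eth2 icmp -> check on the client-facing interface whether the gateway really emits ICMP type 3 code 4 in both cases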

What is the expected traffic in a packet capture for Check Point High Availability?

While working on an issue I noticed this in a Wireshark packet capture: my Nexus 9000 switch is connected to a 15400 XL running Gaia 80.33 (whatever the current version is). There are two 15400 XL in DC1 and two in DC2, and the four are all clustered together for the VSS. The 192.168.xxx.xx is Check Point's "internal switch" address. My question is: should I be seeing these messages sent to the switchport that is connected to the firewall? The port that connects the Nexus to the firewall is for multicast traffic. I did a packet capture in our QA environment, which is a mirror of our production except that there are only two 15400 XL, and I don't see the messages below. Is this a misconfiguration of the firewall High Availability being sent to the Nexus connecting port?

2019-07-10 15:34:26.154998 0.0.0.0 -> 192.168.xxx.xx CPHA CPHAv3223: FWHA_MY_STATE
2019-07-10 15:34:26.155007 0.0.0.0 -> 0.0.0.0 CPHA CPHAv3223: FWHA_IFCONF_REQ
2019-07-10 15:34:26.155010 0.0.0.0 -> 0.0.0.0 CPHA CPHAv3223: FWHA_IFCONF_REQ
2019-07-10 15:34:26.155013 0.0.0.0 -> 0.0.0.0 CPHA CPHAv3223: FWHA_IFCONF_REQ
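For reference, CCP (CPHA) traffic between cluster members is expected; what can be verified on the firewall side is which interfaces send CCP and in which mode (multicast vs. broadcast), since multicast CCP is what typically shows up on the switch port. A sketch with standard ClusterXL commands (the mode-change command may behave slightly differently per version, so treat it as something to verify):
# cphaprob state -> cluster members and their states
# cphaprob -a if -> interfaces that carry CCP and the current CCP mode
# cphaconf set_ccp broadcast -> switch CCP to broadcast if multicast CCP floods the switch (cphaconf set_ccp multicast to revert)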

'Invalid segment retransmission. Packet dropped.'

Hi all, we have a client that is not able to connect to an FTP server. The connection goes through the internal firewall and then gets dropped by our external CP (R80.10). The SYN packet is okay, but the connection is then actually dropped by the same rule that should be allowing it, with the 'Invalid segment retransmission. Packet dropped.' comment. Please see the screen below. We initially thought it was down to the application (FileZilla), but it seems to be the same, for example, from the Windows command line. Thank you for any comments.

Not able to find the serial no.

Hi everyone. I have a UTM-1 570 and I want to get the serial number of the device, but I am not able to do that. Please suggest the CLI command to get the serial number of the unit. Thanks in advance.
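Two commands that usually expose the appliance serial number, depending on whether the UTM-1 runs Gaia or SecurePlatform (a sketch; on very old images one or the other may not be available):
clish> show asset all -> Gaia clish: lists the chassis serial number among the hardware assets
# dmidecode | grep -i serial -> expert mode: reads the serial number from the DMI/BIOS tables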

Dropped Radius Packets with 80.20 Gateway

Hello, I just upgraded a CPAP-13500 cluster to R80.20. Everything worked fine except that RADIUS packets (1812/udp) larger than about 1000 bytes were getting dropped without a log entry, not even with a drop debug message. It seems like they get dropped at the interface level, somewhere between the interface and the inbound chain. If I tcpdump on the incoming and outgoing interfaces, I can see packets coming into the gateway but none leaving on the outgoing interface. If I use fw monitor, I can't see any packets at all. Reverting to R77.30 solved the problem. Does anybody have a clue? Is this a known issue? Bye, Michael
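RADIUS packets above roughly 1000 bytes are usually the ones that arrive IP-fragmented, so one thing worth checking is whether the fragments reach the gateway and where they disappear. A sketch of the captures to compare (interface names are examples):
# tcpdump -ni eth1 'udp port 1812 or (ip[6:2] & 0x3fff != 0)' -> RADIUS packets plus all IP fragments (MF bit or offset set) on the inside interface
# fw ctl zdebug drop | grep 1812 -> kernel drop messages, in case the drop happens before any logging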