Posted by Maarten_Sjouw in Enterprise Appliances and Gaia OS

12600 with VSX low on memory

Ran into a problem while upgrading a 12600 for a customer I was asked to assist. The setup was pretty simple: two management servers on R80.20 in HA, and 2 x 12600 running VSX on R80.10. One giveaway up front: one of the 12600s has 12 GB of memory, the other (the backup) has 6 GB. The challenge was to upgrade to R80.20 to be able to use the dynamic objects for Office 365.

We started with the backup unit, which hosts 5 VSs and 55 virtual switches. The upgrade itself (via CPUSE) went well. After rebooting the box and letting it do its thing, I checked with vsx stat -v and got a list of 39 problems like this:

Unable to open '/vs2/dev/fw0': Connection refused
Unable to open '/vs4/dev/fw0': Connection refused
Unable to open '/vs6/dev/fw0': Connection refused
Unable to open '/vs7/dev/fw0': Connection refused
Unable to open '/vs9/dev/fw0': Connection refused
Unable to open '/vs12/dev/fw0': Connection refused
Unable to open '/vs14/dev/fw0': Connection refused

On the console there were messages about SIC problems. We ended up reinstalling the box from a USB stick with a clean R80.20 and ran a vsx_util reconfigure (after the base interface configuration), but the number of errors remained the same. We opened a TAC case, but nobody could find the cause of the messages and errors.

We decided to add more memory and sent 3 x 4 GB DIMMs onsite. As the box has two physical CPUs it needs an even number of memory banks, so we put in 2 x 4 GB to see if it would improve. It sure did: with the two added DIMMs the number of problems dropped to 20.

One other thing that bothered me was the 55 virtual switches. The engineer who helped this customer during the initial setup had told them to create a vSwitch for each VLAN they use... 🤔 All these switches ended up on one trunk port, each terminating a VLAN. Of the 55, there were 19 vSwitches with no connection to any VS, so I tried deleting one that looked fine in SmartConsole; that went OK and it was removed from both boxes. I continued removing all the ones that had no issues. After a reboot the box came back without any of the previous errors, and I could then remove the last couple of unused vSwitches.

Then the local contact came back with 6 x 4 GB DIMMs and put them all in, and the box is now happily running with 24 GB. Why Check Point says it only supports 12 GB, I don't know. Tomorrow we will upgrade the other box from R80.10 to R80.20 and put more memory in it as well.
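For anyone retracing this, the state and memory checks used above are standard Gaia/VSX commands (a sketch):

# vsx stat -v -> per-VS status; the "Unable to open '/vsN/dev/fw0'" errors show up here
# free -m -> total, used, and free memory on the box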
Posted by MattDunn in Enterprise Appliances and Gaia OS

Network Card Issue

Hi all,

I've got a network issue which isn't Check Point per se, but it's leaving one of my VSX cluster members down, so I figured I'd put it out there and see if anyone has any ideas...

Everything was working perfectly, but after nearly 500 days of uptime I did a routine reboot. The server never came back. Connecting via the local console and doing some testing with tcpdump, I have concluded that the NIC is receiving traffic but not sending traffic. I've proved this beyond doubt, so this is the problem. If I boot the server from a Knoppix live CD I can configure the interfaces and they all work perfectly, so the hardware is fine. Something has gone screwy with the Gaia TCP stack on the server: receiving, but not sending.

Does anyone know what I can do? Or is the best option to reinstall from the ISO? (I hope not - it's a little drastic!)

Thanks,
Matt
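The receive-but-not-send evidence can be documented roughly like this (a sketch; eth1 is a placeholder interface):

# tcpdump -ni eth1 -> inbound frames are visible, so RX works
# ping 192.0.2.1 -> run from a second console while capturing; no echo requests appear on the wire
# ifconfig eth1 -> RX packet counters climb while TX stays flat
# ethtool -S eth1 -> NIC-level TX statistics, to separate driver trouble from the IP stack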

CPUSE will fail to install new Jumbo on restored gateway

Just ran into an interesting scenario with CPUSE failing to install Take 203 on the very last gateway (nearly 40 updated without any issues). I won't be creating a TAC case, out of pure laziness and too much to do as it is. The DA agent version is 1677, so all good there, and the gateway had Take 154 installed before the attempt to upgrade to 203.

It turned out that this particular box had recently been fully rebuilt from the factory image due to an SSD failure (the second SSD dying on a 5900 appliance - not a good trend there). So we went R77.30 > R80.10 > Take 154 > backup restore. All went great and the box was running like a charm. But when I attempted to install Take 203, it failed at a very early stage. Digging into the more detailed logs, I found that CPUSE was looking for an older file that was not there (/opt/CPInstLog/install_cpfc_wrapper_HOTFIX_R80_10_JUMBO_HF.log).

So I compared the deployment agent backup directory contents (/opt/CPda/backup/) on both cluster members: one the restored node, the other the secondary still in its "original" state. A bunch of archives were missing. Then it clicked: when we restored the box from backup, we did not install all the Jumbo HFs that had been installed over time originally, but went straight to the latest Take 154 that was running on the node when the backup was taken.

The quick fix was simply to copy all the missing archives from the "original" node's /opt/CPda/backup to the restored one, after which the Take 203 installation succeeded. It might be a known issue, but there's definitely room for improvement in CPUSE when you use a backup for restore instead of a snapshot.
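The workaround above in command form (a sketch; the peer IP is a placeholder, and the archive names depend on which Jumbo takes were originally installed):

[Expert]# ls /opt/CPda/backup -> run on both members and compare the listings
[Expert]# scp 'admin@192.0.2.2:/opt/CPda/backup/*' /opt/CPda/backup/ -> copy the missing archives from the healthy member, then retry the Take installation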

R80.20 MTU and SecureXL Problem

Hello,

We have an Ethernet link (no Check Point VPN) to a network where the MTU is 1422. If we set the MTU on the interface and disable SecureXL, clients (with the default MTU of 1500) get the ICMP fragmentation-needed packet and start to send packets with a smaller MTU. When we reactivate SecureXL, the clients start to send 1500-byte packets again and do not get an ICMP fragmentation-needed packet from the firewall.

We are using a Check Point 5600 cluster on R80.20 with the latest HFA. Has anybody had the same problem?

Jan
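The behaviour can be reproduced roughly like this (a sketch; eth3 is a placeholder, and on R80.20 "fwaccel off" is a temporary, non-persistent toggle):

clish> set interface eth3 mtu 1422
clish> save config
# fwaccel off -> clients now receive ICMP "fragmentation needed" and lower their MTU
# fwaccel on -> the symptom returns: no ICMP is generated for oversized packets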

Black screen during Gaia install

Hello, I am trying to install Gaia R80.30 on a Dell OptiPlex 780 (4 GB RAM, 500 GB HDD, two Intel 100/1000 NICs). Booting from the DVD ISO works and I get the following screen: "Welcome to Check Point Gaia R80.30". I choose "Install Gaia on this system" and get a black screen with the cursor in the top left; a minute or two later the hard drive and the DVD stop working and the system hangs in this state. No errors or any messages. The computer works well with other operating systems, so I do not think it's a hardware issue. Any idea what can be done in order to complete the Gaia install?

GAIA - Easy execute CLI commands on all gateways simultaneously

Now you can use the new commands "gw_mbash" and "gw_mclish" to execute bash or clish commands on all gateways simultaneously from the management server. All you have to do is copy and paste the lines below to the management server. After that you have two new commands on the management server, with which you can centrally execute simple commands on all gateways that are connected via SIC with the management.

Attention! You can quickly destroy your gateways if you enter the wrong commands!

Command syntax:

# gw_detect
# gw_detect80
Detect all gateways that are supported by this tool. This command only needs to be executed once, or when gateways change in the topology. All found gateways are stored as IP addresses in the file /var/log/g_gateway.txt; these addresses are used later to execute commands on the gateways. The file can also be edited manually to add gateway IP addresses. The execution of this command may take a few minutes. Use "gw_detect80" on R80.x (it is a little faster) and "gw_detect" on R77.x.

# gw_mbash <command>
Execute an expert mode command on all gateways simultaneously.

# gw_mclish <command>
Execute a clish command on all gateways simultaneously.

An example: you want to see the version of all gateways defined in the topology.

Management# gw_detect -> run this first to detect all supported gateways (or "gw_detect80" on R80.x)
Management# gw_mclish show version os edition -> execute this command on all gateways

Now the command "show version os edition" is executed on all gateways and the output is displayed on the management server, sorted by the IP addresses of the gateways in the firewall topology. The same also works for expert mode, for example:

Management# gw_detect -> run this first to detect all supported gateways (or "gw_detect80" on R80.x)
Management# gw_mbash fw ver -> execute this command on all gateways

Tip 1
Use this command to back up the clish configs of all gateways:

Management# gw_mclish show configuration > backup_clish_all_gateways.txt

This can also be run as a simple cron job 😀 (see the sketch at the end of this post).

Tip 2
Check central performance settings for all gateways:

Management# gw_mbash fw tab -t connections -s -> show the connections table for all gateways
Management# gw_mbash fwaccel stat -> show the fwaccel state of all gateways
Management# gw_mbash ips stat -> check on which gateways IPS is enabled
...

Copy and paste these lines to the management server, or download the script "new_multi_commands.sh" and execute the script:

echo '#!/bin/bash' > /usr/local/bin/gw_mbash
echo 'if [ ! -f /var/log/g_gateway.txt ]; then' >> /usr/local/bin/gw_mbash
echo 'echo "First start \"gw_detect\" and/or edit the file /var/log/g_gateway.txt manually. Add here all your gateway IP addresses."' >> /usr/local/bin/gw_mbash
echo 'else' >> /usr/local/bin/gw_mbash
echo 'HAtest="$@"' >> /usr/local/bin/gw_mbash
echo 'echo $HAtest > /var/log/g_command.txt;' >> /usr/local/bin/gw_mbash
echo 'while read line' >> /usr/local/bin/gw_mbash
echo 'do' >> /usr/local/bin/gw_mbash
echo 'if $CPDIR/bin/cprid_util getarch -server $line |grep "gaia" > /dev/null;' >> /usr/local/bin/gw_mbash
echo 'then' >> /usr/local/bin/gw_mbash
echo 'echo "--------- GAIA $line execute command: $HAtest"' >> /usr/local/bin/gw_mbash
echo '$CPDIR/bin/cprid_util -server $line putfile -local_file /var/log/g_command.txt -remote_file /var/log/g_command.txt;' >> /usr/local/bin/gw_mbash
echo '$CPDIR/bin/cprid_util -server $line -verbose rexec -rcmd /bin/bash -f /var/log/g_command.txt' >> /usr/local/bin/gw_mbash
echo 'else' >> /usr/local/bin/gw_mbash
echo 'echo "--------- STOP $line Error: no SIC to gateway or no compatible gateway"' >> /usr/local/bin/gw_mbash
echo 'fi' >> /usr/local/bin/gw_mbash
echo 'done < /var/log/g_gateway.txt' >> /usr/local/bin/gw_mbash
echo 'fi' >> /usr/local/bin/gw_mbash
chmod +x /usr/local/bin/gw_mbash

echo '#!/bin/bash' > /usr/local/bin/gw_mclish
echo 'if [ ! -f /var/log/g_gateway.txt ]; then' >> /usr/local/bin/gw_mclish
echo 'echo "First start \"gw_detect\" and/or edit the file /var/log/g_gateway.txt manually. Add here all your gateway IP addresses."' >> /usr/local/bin/gw_mclish
echo 'else' >> /usr/local/bin/gw_mclish
echo 'HAtest="$@"' >> /usr/local/bin/gw_mclish
echo 'echo $HAtest > /var/log/g_command.txt;' >> /usr/local/bin/gw_mclish
echo 'while read line' >> /usr/local/bin/gw_mclish
echo 'do' >> /usr/local/bin/gw_mclish
echo 'if $CPDIR/bin/cprid_util getarch -server $line |grep "gaia" > /dev/null;' >> /usr/local/bin/gw_mclish
echo 'then' >> /usr/local/bin/gw_mclish
echo 'echo "--------- GAIA $line execute command: $HAtest"' >> /usr/local/bin/gw_mclish
echo '$CPDIR/bin/cprid_util -server $line putfile -local_file /var/log/g_command.txt -remote_file /var/log/g_command.txt;' >> /usr/local/bin/gw_mclish
echo '$CPDIR/bin/cprid_util -server $line -verbose rexec -rcmd /bin/clish -f /var/log/g_command.txt' >> /usr/local/bin/gw_mclish
echo 'else' >> /usr/local/bin/gw_mclish
echo 'echo "--------- STOP $line Error: no SIC to gateway or no compatible gateway"' >> /usr/local/bin/gw_mclish
echo 'fi' >> /usr/local/bin/gw_mclish
echo 'done < /var/log/g_gateway.txt' >> /usr/local/bin/gw_mclish
echo 'fi' >> /usr/local/bin/gw_mclish
chmod +x /usr/local/bin/gw_mclish

echo '#!/bin/bash' > /usr/local/bin/gw_detect
echo 'echo -n > /var/log/g_gateway.txt' >> /usr/local/bin/gw_detect
echo "more $FWDIR/conf/objects.C |grep -A 500 -B 1 ':type (gateway)'| sed -n '/gateway/,/:ipaddr (/p' | grep 'ipaddr (' | sed 's/^[ \t]*//' | sed 's/\:ipaddr (//' |sed 's/)//' > /var/log/g_gwl.txt" >> /usr/local/bin/gw_detect
echo 'while read line' >> /usr/local/bin/gw_detect
echo 'do' >> /usr/local/bin/gw_detect
echo 'if $CPDIR/bin/cprid_util getarch -server $line |grep "gaia" > /dev/null;' >> /usr/local/bin/gw_detect
echo 'then' >> /usr/local/bin/gw_detect
echo 'echo "--------- GAIA $line "' >> /usr/local/bin/gw_detect
echo 'echo "$line" >> /var/log/g_gateway.txt' >> /usr/local/bin/gw_detect
echo 'else' >> /usr/local/bin/gw_detect
echo 'echo "--------- STOP no SIC to gateway or no compatible gateway"' >> /usr/local/bin/gw_detect
echo 'fi' >> /usr/local/bin/gw_detect
echo 'done < /var/log/g_gwl.txt' >> /usr/local/bin/gw_detect
chmod +x /usr/local/bin/gw_detect

echo '#!/bin/bash' > /usr/local/bin/gw_detect80
echo 'echo -n > /var/log/g_gateway.txt' >> /usr/local/bin/gw_detect80
echo "mgmt_cli -r true show gateways-and-servers details-level full --format json | $CPDIR/jq/jq -r '.objects[] | select(.type | contains(\"Member\",\"simple-gateway\")) | .\"ipv4-address\"' |grep -v null|grep -v 0.0. > /var/log/g_gwl.txt" >> /usr/local/bin/gw_detect80
echo 'while read line' >> /usr/local/bin/gw_detect80
echo 'do' >> /usr/local/bin/gw_detect80
echo 'if $CPDIR/bin/cprid_util getarch -server $line |grep "gaia" > /dev/null;' >> /usr/local/bin/gw_detect80
echo 'then' >> /usr/local/bin/gw_detect80
echo 'echo "--------- GAIA $line "' >> /usr/local/bin/gw_detect80
echo 'echo "$line" >> /var/log/g_gateway.txt' >> /usr/local/bin/gw_detect80
echo 'else' >> /usr/local/bin/gw_detect80
echo 'echo "--------- STOP no SIC to gateway or no compatible gateway"' >> /usr/local/bin/gw_detect80
echo 'fi' >> /usr/local/bin/gw_detect80
echo 'done < /var/log/g_gwl.txt' >> /usr/local/bin/gw_detect80
chmod +x /usr/local/bin/gw_detect80

Versions:
v0.1 - 04-14-2019 - gw_multi_commands_v0.1.sh -> beta
v0.2 - 04-16-2019 - gw_multi_commands_v0.2.sh -> remove bugs
v0.3 - 04-17-2019 - gw_multi_commands_v0.3.sh -> split into two commands (gw_detect and the old commands)
v0.4 - 05-05-2019 - gw_multi_commands_v0.4.sh -> add command "gw_detect80"

Video tutorial: (embedded video, view in My Videos)

Copyright by Heiko Ankenbrand 1996-2019
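The cron variant of Tip 1 could look like this (a sketch; the schedule and output path are placeholders, and it assumes gw_detect has been run at least once):

Management# crontab -e
0 1 * * * /usr/local/bin/gw_mclish show configuration > /var/log/backup_clish_all_gateways.txt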

export the 4400 checkpoint configuration

Good morning. Is there a way to export the configuration of a 4400 Check Point appliance? I want to make a backup for safety. Thanks to everyone.
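Two standard options, sketched (both are built-in Gaia commands; "pre_change" is a placeholder name):

clish> show configuration -> dumps the Gaia OS settings as reusable clish commands
clish> add backup local -> creates a full Gaia backup archive under /var/CPbackup/backups/
clish> add snapshot pre_change desc "before change" -> a snapshot additionally captures the OS and hotfixes

Note that the security policy itself lives on the management server, so it needs to be backed up there separately.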

No of Core mismatched with number of CPU

Hi,

We have an R77.30 cluster hosted on open servers with 4 cores assigned. However, on one gateway we are seeing only one CPU active in top. The output of "fw ctl get int fwlic_num_of_allowed_cores" shows that four cores are allowed, so what could be the issue?

GW# fw ctl get int fwlic_num_of_allowed_cores
fwlic_num_of_allowed_cores = 4

GW# fw ctl affinity -l -r
CPU 0: eth2 eth3 eth4 eth10 eth11 fw_0 fw_1 fw_2
All: in.geod mpdaemon fwd cprid cpd

Regards,
Dipayan Nayak
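Given that fw_0 through fw_2 all appear on CPU 0 in the output above, checking affinity as well as the instance count seems warranted. A few standard checks (a sketch):

GW# fw ctl multik stat -> number and state of CoreXL FW instances
GW# grep -c processor /proc/cpuinfo -> how many CPUs the OS actually sees
GW# cpconfig -> the CoreXL menu shows and changes the number of instances (reboot required)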

Message seen on /var/log/messages - "simi_reorder_enqueue_packet"

Hi there guys, I'm seeing the message "simi_reorder_enqueue_packet" in /var/log/messages. Is this an indication of traffic congestion? My network is momentarily encountering intermittent application connectivity issues, especially on VoIP. As usual, no drops are seen in Tracker or zdebug. Hope someone has encountered this.
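Some SecureXL-side checks that can help correlate the message with congestion (a sketch; all standard commands):

# fwaccel stat -> acceleration status and enabled features
# fwaccel stats -s -> share of accelerated vs. F2F (firewall path) packets
# fw ctl zdebug drop -> live drop reasons (run only briefly, the output is verbose)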

R80.x Performance Tuning Tip – Multi Queue

What is Multi-Queue? It is an acceleration feature that lets you assign more than one packet queue and CPU to an interface. When most of the traffic is accelerated by SecureXL, the CPU load from the CoreXL SND instances can be very high, while the CPU load from the CoreXL FW instances can be very low. This is an inefficient utilization of CPU capacity.

By default, the number of CPU cores allocated to CoreXL SND instances is limited by the number of network interfaces that handle the traffic. Because each interface has one traffic queue, only one CPU core can handle each traffic queue at a time, which means that each CoreXL SND instance can use only one CPU core at a time per network interface. Check Point Multi-Queue lets you configure more than one traffic queue for each network interface, so that more than one CPU core (running CoreXL SND) can be used for traffic acceleration per interface. This balances the load efficiently between the CPU cores that run the CoreXL SND instances and the CPU cores that run CoreXL FW instances.

Important - Multi-Queue applies only if SecureXL is enabled.

Chapter
Architecture:
R80.x Security Gateway Architecture (Logical Packet Flow)
R80.x Security Gateway Architecture (Content Inspection)
R80.x Security Gateway Architecture (Acceleration Card Offloading)
R80.x Ports Used for Communication by Various Check Point Modules
Performance Tuning:
R80.x Performance Tuning Tip - AES-NI
R80.x Performance Tuning Tip - SMT (Hyper Threading)
R80.x Performance Tuning Tip - Multi Queue
R80.x Performance Tuning Tip - Connection Table
R80.x Performance Tuning Tip - fw monitor
R80.x Performance Tuning Tip - TCPDUMP vs. CPPCAP
R80.x Performance Tuning Tip - DDoS "fw sam" vs. "fwaccel dos"
Cheat Sheet:
R80.x cheat sheet - fw monitor
R80.x cheat sheet - ClusterXL
More interesting articles:
Article list (Heiko Ankenbrand)

Multi-Queue Requirements and Limitations (Tip 1)
- Multi-Queue is not supported on computers with one CPU core.
- Network interfaces must use a driver that supports Multi-Queue. Only network cards that use the igb (1Gb), ixgbe (10Gb), i40e (40Gb), or mlx5_core (40Gb) drivers support Multi-Queue.
- You can configure a maximum of five interfaces with Multi-Queue.
- You must reboot the Security Gateway after any change in the Multi-Queue configuration.
- For best performance, it is not recommended to assign both an SND and a CoreXL FW instance to the same CPU core.
- Do not change the IRQ affinity of queues manually; doing so can adversely affect performance.
- Multi-Queue is relevant only if SecureXL and CoreXL are enabled.
- You cannot use the "sim affinity" or "fw ctl affinity" commands to change or query the IRQ affinity of Multi-Queue interfaces.
- The number of queues is limited by the number of CPU cores and the type of interface driver:

Network card driver | Speed | Maximal number of RX queues
igb | 1 Gb | 4
ixgbe | 10 Gb | 16
i40e | 40 Gb | 14
mlx5_core | 40 Gb | 10

The maximum RX queues limit dictates the largest number of SND/IRQ instances that can empty packet buffers for an individual interface that has Multi-Queue enabled on that driver.
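Before making changes it helps to see the current queue and core layout (a sketch; cpmq is the same Multi-Queue tool referenced below, and the idle thresholds match the "recommended" criteria later in this post):

# cpmq get -> show the Multi-Queue status of the supported interfaces
# fw ctl affinity -l -r -> which cores serve interfaces (SND) and which run FW instances
# top -> press "1" for the per-core view; SND cores below ~20% idle are the candidates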
Multi-Queue does not work on 3200 / 5000 / 15000 / 23000 appliances in the following scenario (sk114625):
- MQ is enabled for on-board interfaces (e.g., Mgmt, Sync), and
- the number of active RX queues was set to either 3 or 4 (with the "cpmq set rx_num igb <number>" command).

This problem was fixed in Check Point R80.10, and in the Jumbo Hotfix Accumulator for R77.30 since Take_198. The number of traffic queues is limited by the number of CPU cores and the type of network interface card driver. The on-board interfaces on these appliances use the igb driver, which supports up to 4 RX queues; however, the I211 controller on these on-board interfaces supports only up to 2 RX queues.

When Multi-Queue will not help (Tip 2)
- When most of the processing is done in CoreXL, either in the Medium path or in the Firewall path (Slow path).
- When all current CoreXL FW instances are highly loaded, so there are no CPU cores that can be reassigned to SecureXL.
- When IPS or other deep-inspection Software Blades are heavily used.
- When all network interface cards are processing the same amount of traffic.
- When all CPU cores that are currently used by SecureXL are congested.
- When trying to increase the traffic session rate.
- When there is not enough diversity of traffic flows. In the extreme case of a single flow, traffic will be handled by only a single CPU core. (Clarification: the more traffic passes to/from different ports/IP addresses, the more you benefit from Multi-Queue. If there is a single traffic flow from a single client to a single server, Multi-Queue will not help.)

Multi-Queue is recommended when
- the load on the CPU cores that run as SND is high (idle < 20%), and
- the load on the CPU cores that run CoreXL FW instances is low (idle > 50%), and
- there are no CPU cores left to be assigned to the SND by changing interface affinity.

Multi-Queue support on Appliance vs. Open Server

Gateway type | Network interfaces that support Multi-Queue
Check Point Appliance | MQ is supported on all appliances that use the following drivers: igb, ixgbe, i40e, mlx5_core. These expansion line cards for 4000, 12000, and 21000 appliances support Multi-Queue: CPAC-ACC-4-1C, CPAC-ACC-4-1F, CPAC-ACC-8-1C, CPAC-ACC-2-10F, CPAC-ACC-4-10F. This expansion line card for 5000, 13000, and 23000 appliances supports Multi-Queue: CPAC-2-40F-B.
Open Server | Network cards that use the igb (1Gb), ixgbe (10Gb), i40e (40Gb), or mlx5_core (40Gb) drivers support Multi-Queue.

Multi-Queue support on Open Server - Intel Network Cards (Tip 3)

The following list gives an overview of all Intel cards from the Check Point HCL for open servers as of 11/21/2018, cross-referenced to the Intel drivers. I do not assume any liability for the correctness of the information; these lists should only be used to help you find the right drivers. This is not an official Check Point document, so please always read the official Check Point documents.
Intel network card | Ports | Chipset | PCI ID | Driver | PCI | Speed | MQ
10 Gigabit AT | 1 | 82598EB | 8086:25e7 | ixgbe | PCI-E | 10G Copper | yes
10 Gigabit CX4 | 2 | 82598EB | 8086:10ec | ixgbe | PCI-E | 10G Copper | yes
10 Gigabit XF family (Dual and Single Port models, SR and LR) | 2 | 82598 | 8086:10c6 | ixgbe | PCI-E | 10G Fiber | yes
Ethernet Converged Network Adapter X540-T2 | 2 | X540 | 8086:1528 | ixgbe | PCI-E | 100/1G/10G Copper | yes
Ethernet Server Adapter I340-T2 | 2 | 82580 | - | igb | PCI-E | 10/100/1G Copper | yes
Ethernet Server Adapter I340-T4 | 2 | 82580 | - | igb | PCI-E | 10/100/1G Copper | yes
Ethernet Server Adapter X520 (X520-SR2, X520-SR1, X520-LR1, X520-DA2) | 2 | X520 | - | ixgbe | PCI-E | 10G Fiber | yes
Gigabit VT Quad Port Server Adapter | 4 | 82575GB | 8086:10d6 | igb | PCI-E | 10/100/1G Copper | yes
Intel Gigabit ET2 Quad Port Server Adapter | 4 | - | - | igb | PCI-E | 1G Copper | yes
PRO/10GbE CX4 | 1 | 82597EX | 8086:109e | ixgb | PCI-X | 10G Copper | no
PRO/10GbE LR | 1 | 82597EX | 8086:1b48 | ixgb | PCI-X | 10G Fiber | no
PRO/10GbE SR | 1 | 82597EX | 8086:1a48 | ixgb | PCI-X | 10G Fiber | no
PRO/1000 Dual 82546GB | 2 | 82546GB | 8086:108a | e1000 | PCI-E | 10/100/1G Copper | no
Pro/1000 EF Dual | 2 | 82576 | 8086:10e6 | igb ? | PCI-E | 1G Fiber | yes ?
Pro/1000 ET Dual Port Server Adapter | 2 | 82576 | - | igb | PCI-E | 1G Copper | yes
PRO/1000 ET Quad Port Server Adapter | 4 | 82576 | 8086:10e8 | igb | PCI-E | 10/100/1G Copper | yes
PRO/1000 GT Quad | 4 | 82546 | 8086:10b5 | e1000 | PCI-X | 10/100/1G Copper | no
PRO/1000 MF | 1 | 82546 ? 82545 ? | - | e1000 | PCI-X | 1G Fiber | no
PRO/1000 MF (LX) | 1 | 82546 ? 82545 ? | - | e1000 | PCI-X | 1G Fiber | no
PRO/1000 MF Dual | 2 | 82546 ? 82545 ? | - | e1000 | PCI-X | 1G Fiber | no
PRO/1000 MF Quad | 4 | 82546 ? 82545 ? | - | e1000 | PCI-X | 1G Fiber | no
PRO/1000 PF | 1 | 82571 ? | 8086:107e | e1000 | PCI-E | 1G Fiber | no
PRO/1000 PF Dual | 2 | 82571 ? | 8086:115f | e1000 | PCI-E | 1G Fiber | no
PRO/1000 PF Quad Port Server Adapter | 4 | 82571 ? | 8086:10a5 | e1000 | PCI-E | 1G Fiber | no
PRO/1000 PT | 1 | 82571 | 8086:1082 | e1000 | PCI-E | 10/100/1G Copper | no
PRO/1000 PT Dual | 2 | 82571 | 8086:105e | e1000 | PCI-E | 10/100/1G Copper | no
PRO/1000 PT Dual UTP | 2 | 82571 | 8086:108a | e1000 | PCI-E | 10/100/1G Copper | no
PRO/1000 PT Quad | 4 | 82571 | 8086:10a4 | e1000 | PCI-E | 10/100/1G Copper | no
PRO/1000 PT Quad Low Profile | 4 | 82571 | 8086:10bc | e1000 | PCI-E | 10/100/1G Copper | no
PRO/1000 XF | 1 | 82544 | - | e1000 | PCI-X | 1G Fiber | no

For all "?" entries I could not clarify the details exactly.

Multi-Queue support on Open Server - HP and IBM Network Cards (Tip 4)

The following list gives an overview of all HP cards from the Check Point HCL for open servers as of 11/22/2018, cross-referenced to the Intel drivers. I do not assume any liability for the correctness of the information; these lists should only be used to help you find the right drivers. This is not an official Check Point document, so please always read the official Check Point documents.
HP network card | Ports | Chipset | PCI ID | Driver | PCI | Speed | MQ
Ethernet 1Gb 4-port 331T | 4 | BCM5719 | 14e4:1657 | tg3 | PCI-E | 1G Copper | no
Ethernet 1Gb 4-port 366FLR | 4 | Intel I350 | 8086:1521 | igb | PCI-E | 1G Copper | yes
Ethernet 1Gb 4-port 366T | 4 | Intel I350 | 8086:1521 | igb | PCI-E | 1G Copper | yes
Ethernet 10Gb 2-port 560SFP+ | 2 | Intel 82599EB | 8086:10fb | ixgbe | PCI-E | 10G Fiber | yes
Ethernet 10Gb 2-port 561FLR-T | 2 | Intel X540-AT2 | 8086:1528 | ixgbe | PCI-E | 10G Copper | yes
HPE Ethernet 10Gb 2-port 562FLR-SFP+ | 2 | Intel X710 | 8086:1572 | i40e | PCI-E | 10G Copper | yes
Ethernet 10Gb 2-port 561T | 2 | Intel X540-AT2 | 8086:1528 | ixgbe | PCI-E | 10G Copper | yes
NC110T | 1 | Intel 82572GI | 8086:10b9 | e1000 | PCI-E | 10/100/1G Copper | no
NC320T | 1 | BCM5721 KFB | 14e4:1659 | tg3 | PCI-E | 10/100/1G Copper | no
NC325m Quad Port | 4 | BCM5715S | 14e4:1679 | tg3 | PCI-E | 1G Copper | no
NC326m PCI Express Dual Port 1Gb Server Adapter for c-Class BladeSystem | 2 | BCM5715S | - | tg3 | PCI-E | 1G Copper | no
NC340T | 4 | Intel 82546GB | 8086:10b5 | e1000 | PCI-X | 10/100/1G Copper | no
NC360T | 2 | Intel 82571EB | 8086:105e | e1000 | PCI-E | 10/100/1G Copper | no
NC364T | 4 | Intel 82571EB | 8086:10bc | e1000 | PCI-E | 10/100/1G Copper | no
NC365T PCI Express Quad Port | 4 | Intel 82580 | 8086:150e | igb | PCI-E | 10/100/1G Copper | yes
NC373F | 1 | Broadcom 5708 | 14e4:16ac | bnx2 | PCI-E | 1G Copper | no
NC373m Dual Port | 2 | BCM5708S | 14e4:16ac | bnx2 | PCI-E | 10/100/1G Copper | no
NC373T | 1 | Broadcom 5708 | 14e4:16ac | bnx2 | PCI-E | 10/100/1G Copper | no
NC380T PCI Express Dual Port Multifunction Gigabit server | 2 | BCM5706 | - | bnx2 | PCI-E | 10/100/1G Copper | no
NC522SFP Dual Port 10GbE Server Adapter | 2 | NX3031 | 4040:0100 | ??? | PCI-E | 10G Fiber | no
NC550SFP Dual Port 10GbE Server Adapter | 2 | Emulex OneConn | 19a2:0700 | be2net | PCI-E | 10G Fiber | no
NC552SFP 10GbE 2-port Ethernet Server | 2 | Emulex OneConn | 19a2:0710 | be2net | PCI-E | 10G Fiber | no
NC7170 | 2 | Intel 82546EB | 8086:1010 | e1000 | PCI-X | 10/100/1G Copper | no

For all "?" entries I could not clarify the details exactly.

IBM network card | Ports | Chipset | PCI ID | Driver | PCI | Speed | MQ
Broadcom 10Gb 4-Port Ethernet Expansion Card (CFFh) for IBM BladeCenter | 4 | BCM57710 | - | bnx2x | PCI-E | 10G Fiber | no
Broadcom NetXtreme Quad Port GbE network Adapter | 4 | I350 | - | igb | PCI-E | 1G Copper | yes
NetXuleme 1000T | 1 | ??? (1) | - | ??? | PCI-X | 10/100/1G Copper | ???
NetXuleme 1000T Dual | 2 | ??? (1) | - | ??? | PCI-X | 10/100/1G Copper | ???
PRO/1000 PT Dual Port Server Adapter | 2 | 82571GB | - | e1000 | PCI-E | 10/100/1G Copper | no

(1) These network cards can't even be found on Google.

Notes on the Intel igb and ixgbe drivers

I used the LKDDb database to identify the drivers. LKDDb is an attempt to build a comprehensive database of hardware and protocols known by Linux kernels. The database includes numeric identifiers of hardware, the kernel configuration menu needed to build the driver, and the driver filename. The database is built automagically from kernel sources, so it is very easy to keep it updated. This was the basis of the cross-reference between the Check Point HCL and the Intel drivers.

Link to the LKDDb web database: https://cateee.net/lkddb/web-lkddb/
Link to the LKDDb database drivers: igb, ixgbe, i40e, mlx5_core

There you can find the following output for each driver, e.g. igb - numeric IDs (from LKDDb) and names (from pci.ids) of recognized devices:
vendor: 8086 ("Intel Corporation"), device: 0438 ("DH8900CC Series Gigabit Network Connection")
vendor: 8086 ("Intel Corporation"), device: 10a9 ("82575EB Gigabit Backplane Connection")
vendor: 8086 ("Intel Corporation"), device: 10c9 ("82576 Gigabit Network Connection")
vendor: 8086 ("Intel Corporation"), device: 10d6 ("82575GB Gigabit Network Connection")
and many more...
How to recognize the driver

With ethtool you can display the version and type of the driver, for example for interface eth0:

# ethtool -i eth0
driver: igb
version: 2.1.0-k2
firmware-version: 3.2-9
bus-info: 0000:02:00.0

Active RX multi queues - formula

By default, the Security Gateway calculates the number of active RX queues based on this formula:

RX queues = [Total number of CPU cores] - [Number of CoreXL FW instances]

For example, on a 12-core gateway with 10 CoreXL FW instances, 12 - 10 = 2 RX queues are active by default.

Configure

Here I would refer to the following links:
Performance Tuning R80.10 Administration Guide
Performance Tuning R80.20 Administration Guide

References

Best Practices - Security Gateway Performance
Multi-Queue does not work on 3200 / 5000 / 15000 / 23000 appliances when it is enabled for on-board interfaces
Performance Tuning R80.10 Administration Guide
Performance Tuning R80.20 Administration Guide

Intel:
Download Intel® Network Adapter Virtual Function Driver for Intel® 10 Gigabit Ethernet Network Connections
Download Network Adapter Driver for Gigabit PCI Based Network Connections for Linux*
Download Intel® Network Adapter Driver for 82575/6, 82580, I350, and I210/211-Based Gigabit Network Connections for Linu…

LKDDb (Linux Kernel Driver Database): https://cateee.net/lkddb/web-lkddb/

Copyright by Heiko Ankenbrand 1994-2019

Comparing 15000 series appliances against 6000 series

Hello!

Check Point released the new 6000 appliance series, and here comes a new challenge. For a customer who wants NGTP functionality, in a scenario where based on sizing a 15600 is a perfect match, should we go for it, or is it even better to go with the 6800 model? The NGTP performance of the 6800 is far better by datasheet, and the price is much lower too.

Enterprise Testing Conditions:
6800 Security Gateway - 8.9 Gbps of Threat Prevention
15600 Security Gateway - 7.4 Gbps of Threat Prevention
Both numbers are for R80.20.

Your opinions?

BR,
Vato

BGP ISP advertise /24

Hello, I am considering replacing my Cisco routers with a Check Point to advertise my /24 to each of my ISP providers. Has anyone ever done this? If so, are there any caveats or issues that you have found? Thanks, Aaron
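For reference, a rough Gaia clish sketch of one eBGP peering that advertises a /24 (the AS numbers, peer IP, and prefix are placeholders; verify the exact syntax in the Gaia Advanced Routing Administration Guide for your version):

clish> set as 64512
clish> set bgp external remote-as 65001 on
clish> set bgp external remote-as 65001 peer 198.51.100.1 on
clish> set route-redistribution to bgp-as 65001 from static-route 203.0.113.0/24 on
clish> save config

A second "remote-as" block would be added per ISP; steering traffic between the two ISPs (prepending, local preference) is done with route maps and inbound/outbound route filters, which Gaia also supports.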

Backup/Restore for RMA

What's the best way to do this? We need the replacement to be on the same version. Is there a flag/switch to do a full backup with OS/hotfixes/etc. so the restore gives a like-for-like duplicate device swap-out?

RMA (new device) = Production (old device)
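The usual split, sketched (all standard Gaia mechanisms; "pre_rma" is a placeholder name):

clish> show version all -> record the exact version and Jumbo take, so the RMA unit can be brought to the same level first
clish> add backup local -> configuration-level backup; it does not carry the hotfixes themselves
clish> add snapshot pre_rma desc "before swap" -> a snapshot includes the OS and installed hotfixes, but is generally restorable only on the same model and version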

R80.30 stability status

Dear all, I have a 15600 Security Gateway appliance and a Smart-1 225 management appliance, currently running R80.10 with Take 189 installed. I am facing VPN instability issues on the current version; most of our site-to-site VPNs are with AWS. Is R80.30 a stable version? Does it resolve these VPN issues?
Posted by HeikoAnkenbrand in Enterprise Appliances and Gaia OS

R80.x Ports Used for Communication by Various Check Point Modules

Introduction

This drawing should give you an overview of the R80 and R77 ports and communication flows, and of how the different Check Point modules communicate with each other. Furthermore, services that are used for firewall operation are also considered; some of these firewall services are also mapped as implied rules in the rule set on the firewall.

Overview

Chapter
Architecture:
R80.x Security Gateway Architecture (Logical Packet Flow)
R80.x Security Gateway Architecture (Content Inspection)
R80.x Security Gateway Architecture (Acceleration Card Offloading)
R80.x Ports Used for Communication by Various Check Point Modules
Performance Tuning:
R80.x Performance Tuning Tip - AES-NI
R80.x Performance Tuning Tip - SMT (Hyper Threading)
R80.x Performance Tuning Tip - Multi Queue
R80.x Performance Tuning Tip - Connection Table
R80.x Performance Tuning Tip - fw monitor
R80.x Performance Tuning Tip - TCPDUMP vs. CPPCAP
R80.x Performance Tuning Tip - DDoS "fw sam" vs. "fwaccel dos"
Cheat Sheet:
R80.x cheat sheet - fw monitor
R80.x cheat sheet - ClusterXL
More interesting articles:
Article list (Heiko Ankenbrand)

References

Support Center: Ports used by Check Point software

Versions

Version 1.4:
+ v1.4a bug fix, update port 1701 udp L2TP 09.04.2018
+ v1.4b bug fix 15.04.2018
+ v1.4c CPUSE update 17.04.2018
+ v1.4d legend fixed 17.04.2018
+ v1.4e add SmartLog and SmartView on port 443 20.04.2018
+ v1.4f bug fix 21.05.2018
+ v1.4g bug fix 25.05.2018
+ v1.4h add backup ports 21, 22, 69 UDP and ClusterXL full sync port 256 30.05.2018
+ v1.4i add port 259 udp VPN link probing 12.06.2018
+ v1.4j bug fix 17.06.2018
+ v1.4k add OSPF/BGP route sync 25.06.2018
+ v1.4l bug fix routed 29.06.2018
+ v1.4m bug fix tcp/udp ports 03.07.2018
+ v1.4n add port 256 13.07.2018
+ v1.4o bug fix / add TE ports 27.11.2018
+ v1.4p bug fix routed port 2010 23.01.2019
+ v1.4q change to new forum format 16.03.2019

Old version 1.3:
+ v1.3a new design (blue, gray), bug fix, add netflow, new names 27.03.2018
+ v1.3b add routing ports, bug fix design 28.03.2018
+ v1.3c bug fix, rename ports (old) 29.03.2018
+ v1.3d bug fix 30.03.2018
+ v1.3e fix issue L2TP UDP port 1701

Old version 1.1:
+ v1.1a added r80.xx ports 16.03.2018
+ v1.1b bug in drawing fixed 17.03.2018
+ v1.1c add RSA, TACACS, Radius 19.03.2018
+ v1.1d add 900, 259 Client-auth, deleted old 4.0 ports 20.03.2018
+ v1.1e add OPSEC, delete R55 ports 21.03.2018
+ v1.1f bug fix 22.03.2018
+ v1.1g bug fix, add mail smtp, add dhcp, add snmp 25.03.2018

Copyright by Heiko Ankenbrand 1994-2019