
Downloading CPUSE updates outside of Check Point Cloud

Hello all, not all companies allow internet access for their management servers and gateways. With internet access, installing the latest Jumbo Hotfix or even upgrading to a major release is just one command. My idea is that CPUSE could still be used when there is no internet access, by letting you choose between the internet and an internal IP address where all needed packages are stored. Something like two new CPUSE commands:
1. set installer source internet
2. set installer source local <IP_ADDRESS>
If an admin chooses a local repository, CPUSE would connect to that server over HTTPS and download packages from there. I am fully aware that a similar idea is already covered by the Central Deployment Tool (CDT) and by SmartUpdate. Even better would be a dedicated API and, later, a UI, as mentioned by @Dorit_Dor.
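A rough sketch of how the proposed workflow could look in Gaia clish. Note that the "set installer source" command is the suggestion made above and does not exist in CPUSE today; the repository IP and the package number are placeholders:

# proposed: point CPUSE at an internal HTTPS repository instead of the Check Point cloud
set installer source local 10.1.1.50
# existing CPUSE clish commands would then work against that source
show installer packages available-for-download
installer download <package-number>
installer install <package-number>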

R80.20 MTU and SecureXL Problem

Hello, we have an Ethernet link (no Check Point VPN) to a network where the MTU is 1422. If we set the MTU on the interface and disable SecureXL, the clients (with a default MTU of 1500) receive the ICMP fragmentation-needed packet and start to send packets with the smaller MTU. When we re-enable SecureXL, the clients start sending 1500-byte packets again and do not get an ICMP fragmentation-needed packet from the firewall. We are using a Check Point 5600 cluster on R80.20 with the latest Jumbo Hotfix. Has anybody seen the same problem? Jan
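A few commands that might help narrow this down on the gateway (the interface name eth3 is only an example; the commands themselves are standard tools):

# check whether SecureXL is currently enabled
fwaccel stat
# confirm the MTU configured on the affected interface
ip link show eth3
# capture ICMP 'fragmentation needed' messages (type 3, code 4) sent towards the clients
tcpdump -ni eth3 'icmp[icmptype] == 3 and icmp[icmpcode] == 4'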

What is the equivalent of cphaprob show_bond for a Standalone Gateway (ClusterXL not running)?

The cphaprob show_bond command in expert mode is very handy for gateways running ClusterXL, but it doesn't work for bond interfaces on a standalone gateway. Is there some other command that would show me similar information for troubleshooting bonded interfaces on a standalone gateway?
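Gaia uses the standard Linux bonding driver, so one option (a suggestion, not an official equivalent of cphaprob show_bond) is to read the kernel's bonding status file and the clish bonding configuration:

# show bonding mode, slave interfaces, link state and failure counters for bond0
cat /proc/net/bonding/bond0
# list the bonding groups configured in Gaia (run from expert mode)
clish -c "show bonding groups"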

Please tell me how to disable the "activate_sw_raid" command

In the past, I ran the "activate_sw_raid" command to test HDD mirroring. After finishing the test I removed the secondary HDD, and there is no plan to use HDD mirroring in the future. The problem is that /var/log/messages is now filled with messages like the following:
-----------------------------
Aug 15 11:28:45 2019 12200App cpd: Raid: Failed at getting the rev number for Disk 1
Aug 15 11:28:45 2019 12200App cpd: Problems with getting output from pipe
Aug 15 11:28:45 2019 12200App cpd: Raid: Failed at getting the LBA for Disk 1
Aug 15 11:28:51 2019 12200App kernel: [fw4_1];fw_send_kmsg: No buffer for tsid 15
Aug 15 11:28:58 2019 12200App ntpd[8125]: kernel time sync enabled 0001
Aug 15 11:29:06 2019 12200App kernel: [fw4_1];fw_send_kmsg: No buffer for tsid 15
Aug 15 11:29:36 2019 12200App last message repeated 2 times
Aug 15 11:29:45 2019 12200App cpd: Problems with getting output from pipe
Aug 15 11:29:45 2019 12200App cpd: Raid: Failed at getting the vendor name for Disk 1
Aug 15 11:29:45 2019 12200App cpd: Problems with getting output from pipe
Aug 15 11:29:45 2019 12200App cpd: Raid: Failed at getting the product ID for Disk 1
Aug 15 11:29:45 2019 12200App cpd: Problems with getting output from pipe
Aug 15 11:29:45 2019 12200App cpd: Raid: Failed at getting the rev number for Disk 1
Aug 15 11:29:45 2019 12200App cpd: Problems with getting output from pipe
Aug 15 11:29:45 2019 12200App cpd: Raid: Failed at getting the LBA for Disk 1
-----------------------------
I do not want these messages to be logged. Please tell me how to turn this RAID monitoring off again.

Proxy ARP after upgrade to R80.30

This week we upgraded some clusters from R80.10 to R80.30 because the customer wants the new and improved HTTPS inspection functionality. On two VRRP clusters we had some automatic NAT rules and a special Hide NAT (for WiFi guests). After upgrading you install the policy twice: first the Access policy and then again for the Threat Prevention policy. After some time we were told the guest WiFi did not work; the investigation eventually pointed to proxy ARP not being active, so we added a manual proxy ARP entry for the Hide NAT address and pushed the Access policy (a third time). Looking with fw ctl arp we then saw two proxy ARP addresses: the one we had added and one belonging to an automatic NAT. After removing the manual proxy ARP entry again, fw ctl arp kept showing both entries. When we upgraded the other cluster we checked after the first, second and third push of the Access policy, and only after the third push did the proxy ARP addresses show up. It has been reported and R&D will be informed.
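For anyone checking the same behaviour, these are the commands involved (the addresses in the clish example are placeholders; verify the exact clish syntax on your version):

# list the proxy ARP entries the firewall kernel currently answers for
fw ctl arp
# add a manual proxy ARP entry for a Hide NAT address in Gaia clish (example values)
add arp proxy ipv4-address 192.0.2.10 interface eth1 real-ipv4-address 10.0.0.1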

Lost access to gaia portal

Hi guys, running R77.30. Not long ago we lost the ability to browse to the web UI of our gateway and management server. It used to work (self-signed cert), but now the browser throws an error such as "Can't connect securely to this page" with no option to continue anyway. We have tried three different browsers and enabled all TLS versions and even SSLv3, but nothing helps. A Wireshark capture shows a Client Hello offering TLSv1.2, then TLSv1.0, then SSLv3.0, and then it stops. Does anyone have a solution for this? I would be happy just running plain HTTP, but that doesn't seem to be an option.
Config:
set web table-refresh-rate 15
set web session-timeout 10
set web ssl-port 443
set web ssl3-enabled on
set web daemon-enable on
Thanks!
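One way to see exactly where the handshake stops is to test the portal from another machine with openssl (the hostname is a placeholder):

# attempt a TLS 1.2 handshake against the Gaia portal and print the server response or alert
openssl s_client -connect gw.example.com:443 -tls1_2
# repeat with -tls1 to compare; no ServerHello at all usually points at the portal's
# httpd/certificate configuration rather than at the browsers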

R80.x Ports Used for Communication by Various Check Point Modules

Introduction
This drawing should give you an overview of the R80 and R77 ports and the corresponding communication flows. It should show you how the different Check Point modules communicate with each other. Furthermore, services that are used for firewall operation are also considered; some of these firewall services are also mapped as implied rules on the firewall.

Overview

Chapter
Architecture:
R80.x Security Gateway Architecture (Logical Packet Flow)
R80.x Security Gateway Architecture (Content Inspection)
R80.x Security Gateway Architecture (Acceleration Card Offloading)
R80.x Ports Used for Communication by Various Check Point Modules
Performance Tuning:
R80.x Performance Tuning Tip - AES-NI
R80.x Performance Tuning Tip - SMT (Hyper Threading)
R80.x Performance Tuning Tip - Multi Queue
R80.x Performance Tuning Tip - Connection Table
R80.x Performance Tuning Tip - fw monitor
R80.x Performance Tuning Tip - TCPDUMP vs. CPPCAP
R80.x Performance Tuning Tip - DDoS "fw sam" vs. "fwaccel dos"
Cheat Sheet:
R80.x cheat sheet - fw monitor
R80.x cheat sheet - ClusterXL
More interesting articles:
Article list (Heiko Ankenbrand)

References
Support Center: Ports used by Check Point software

Versions
+ v1.4a bug fix, update port 1701 udp L2TP 09.04.2018
+ v1.4b bug fix 15.04.2018
+ v1.4c CPUSE update 17.04.2018
+ v1.4d legend fixed 17.04.2018
+ v1.4e add SmartLog and SmartView on port 443 20.04.2018
+ v1.4f bug fix 21.05.2018
+ v1.4g bug fix 25.05.2018
+ v1.4h add backup ports 21, 22, 69 UDP and ClusterXL full sync port 256 30.05.2018
+ v1.4i add port 259 udp VPN link probing 12.06.2018
+ v1.4j bug fix 17.06.2018
+ v1.4k add OSPF/BGP route sync 25.06.2018
+ v1.4l bug fix routed 29.06.2018
+ v1.4m bug fix tcp/udp ports 03.07.2018
+ v1.4n add port 256 13.07.2018
+ v1.4o bug fix / add TE ports 27.11.2018
+ v1.4p bug fix routed port 2010 23.01.2019
+ v1.4q change to new forum format 16.03.2019
old version 1.3:
+ v1.3a new design (blue, gray), bug fix, add netflow, new names 27.03.2018
+ v1.3b add routing ports, bug fix design 28.03.2018
+ v1.3c bug fix, rename ports (old) 29.03.2018
+ v1.3d bug fix 30.03.2018
+ v1.3e fix issue L2TP UDP port 1701
old version 1.1:
+ v1.1a - added r80.xx ports 16.03.2018
+ v1.1b - bug in drawing fixed 17.03.2018
+ v1.1c - add RSA, TACACS, Radius 19.03.2018
+ v1.1d - add 900, 259 Client-auth - deleted old 4.0 ports 20.03.2018
+ v1.1e - add OPSEC - delete R55 ports 21.03.2018
+ v1.1f - bug fix 22.03.2018
+ v1.1g - bug fix - add mail smtp - add dhcp - add snmp 25.03.2018

Copyright by Heiko Ankenbrand 1994-2019

One-liner for Address Spoofing Troubleshooting

One-liner (Bash) to show a summary of each gateway interface's calculated topology and address spoofing setting. In expert mode run:

if [[ `$CPDIR/bin/cpprod_util FwIsFirewallModule 2>/dev/null` != *'1'* ]]; then echo; tput bold; echo ' Not a firewall gateway!'; tput sgr0; echo; else echo; tput bold; echo -n ' Interface Topology '; tput sgr0; echo -n '> '; tput bold; tput setaf 1; if [[ -n "$vsname" ]] && [[ $vsname != *'unavail'* ]]; then echo $vsname' (ID: '$INSTANCE_VSID')'; else hostname; fi; tput sgr0; echo -n ' '; printf '%.s-' {1..80}; echo; egrep -B1 $'ifindex|:ipaddr|\(\x22<[0-9]|objtype|has_addr_info|:monitor_only|:external' $FWDIR/state/local/FW1/local.set | sed -n "/$(if [[ -n "$vsname" ]] && [[ $vsname != *'unavail'* ]] && [[ $INSTANCE_VSID != '0' ]]; then echo $vsname; else grep `hostname` /etc/hosts | cut -f1 -d' '; fi)*$/,\$ p" | tail -n +3 | sed 's/[\x22\t()<>]//g' | sed 's/--//g' | sed '$!N;s/\n:ipaddr6/ IPv6/;P;D' | sed '/IPv6/!s/://g' | sed 's/interface_topology/\tCalculated Interface Topology/g' | sed '0,/ifindex 0/{/ifindex 0/d;}' | sed '/ifindex 0/q' | sed '/spoof\|scan/d' | sed 's/has_addr_info true/\tAddress Spoofing Protection: Enabled/g' | sed 's/has_addr_info false/\tAddress Spoofing Protection: Disabled/g' | sed -e '/Prot/{n;d}' | sed '$!N;s/\nmonitor_only true/ (Detect Mode)/;P;D' | sed '$!N;s/\nmonitor_only false/ (Prevent Mode)/;P;D' | sed '$!N;s/\nexternal false/ - Internal Interface/;P;D' | sed '$!N;s/\nexternal true/ - External Interface/;P;D' | sed '/objtype/q' | tac | sed '/ifindex 0/I,+2 d' | sed '/Address/,$!d' | tac | sed '/ifindex/d' | sed 's/,/ -/g' | sed '$!N;s/\nipaddr/ >/;P;D' | sed '/ - /s/^ /\t/' | egrep -C 9999 --color=auto $'>|IPv6|External|Disabled|Detect'; echo; fi

The one-liner is IPv4 and IPv6 compatible, works on clustered and single gateway environments as well as within VSX, shows all interface types configured in your firewall object within SmartDashboard, colors specific words of the output for easier identification of important settings, adds additional information regarding the Address Spoofing setting and mode as well as the topology type of each interface, and is of course completely integrated within our ccc script.
Thanks to Tim Hall's preliminary work in this thread.
Thanks to Norbert Bohusch for IPv6 support and testing.
Thanks to Kaspars Zibarts & Bob Zimmerman for VSX support and testing.
Thanks to Anthony Joubaire for support and testing multiple installation targets.
-- More one-liners --
One-liner to show VPN topology on gateways
One-liner to show Geo Policy on gateways
FW Monitor SuperTool

checkpoint r80.20 gaia os dhcp server option 150

Hi, I want to configure the DHCP server on my Gaia OS with DHCP option 150 (TFTP server for IP phones). I saw sk92473, but it says we can only use the options shown in https://linux.die.net/man/5/dhcpd.conf, and option 150 isn't there. Does anyone know whether this is supported and how to configure it? Thanks
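Gaia's DHCP service is based on ISC dhcpd, which allows custom option codes to be declared in dhcpd.conf. A sketch of what that could look like, assuming a manual edit of the Gaia-generated dhcpd.conf (not officially supported, and clish changes may overwrite it; the option name and addresses are placeholders):

# declare option 150 as a list of IP addresses and hand it out to the phone subnet
option voip-tftp-servers code 150 = array of ip-address;
subnet 192.168.50.0 netmask 255.255.255.0 {
  range 192.168.50.100 192.168.50.200;
  option voip-tftp-servers 192.168.50.10;
}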

Gaia HealthCheck Script v6.16 released

Check Point released v6.16 of its Gaia HealthCheck Script. Script author: @Nathan_Davieau (LinkedIn profile)
What's new: Added additional descriptions for known issues in /var/log/messages
What's missing: Script self-update
Download: healthcheck.sh script v6.16 (13Aug2019)

PBR and Hide NAT

Good day. I have two links and PBR configured:
Link 1: eth1 187.150.0.10
Link 2: eth2 203.0.13.53
My default gateway is 187.150.0.29.
Table 1 (X), provider gateway: 187.150.0.29
Table 2 (Y), provider gateway: 203.0.13.54
I added a PBR rule with source 192.168.10.10 and action: Table 2 (Y). In SmartDashboard I added the host and configured a Hide NAT behind IP 203.0.13.53, and this works perfectly. But when I do a tracert from Windows to 8.8.8.8, the route shows that traffic leaves via 187.150.0.29, even though we are supposed to have interface redundancy for routing the traffic: when the first link goes down we lose internet connectivity for the whole organization. Any help is really appreciated. Regards.
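For reference, a minimal Gaia clish sketch of a PBR setup like the one described (table name, priority and addresses are examples taken from the post; confirm the exact syntax for your Gaia version):

# alternate routing table with its own default gateway on the second provider
set pbr table Table2_Y static-route default nexthop gateway address 203.0.13.54 on
# send traffic from the test host through that table
set pbr rule priority 10 match from 192.168.10.10/32
set pbr rule priority 10 action table Table2_Y
save config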

Moving from 4400 (77.30) to 6500 (80.20)

Hi, we have a 4400 cluster (R77.30) and are planning to move to a 6500 cluster (R80.20). The management server has already been moved to R80.20, and the new 6500 appliances were installed with R80.20 using ISOmorphic. For the gateway replacement I was thinking of the following steps:
1. Export the configuration from the 4400 appliances - show configuration, then save the configuration to a file.
2. Import the configuration to the 6500 appliances - paste the commands from the 4400 appliances and verify the interfaces.
3. On the management server, modify the gateway object with the 6500 appliance hardware and change the software version to R80.20.
4. Establish SIC with the 6500 appliances.
5. Install policy on the 6500 appliances.
Are there any more steps to take into consideration? Regards,
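A small sketch of the export part of step 1 in Gaia clish (the file name is an example):

# on the old 4400, in clish
show configuration              # review the running configuration
save configuration cfg_4400.txt # write it to a file in the admin home directory
# copy the file off the box (e.g. with scp), edit out anything hardware-specific,
# then paste the relevant commands into clish on the new 6500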

Hardware for home-lab

Hi, I want to run R80.30 in my home lab and get all R80 features. Management will run on another remote server. What are you using? I am thinking of running Gaia on a NUC or another small PC under VMware, or should I get a 1430 appliance? Any recommendations?

R80.x Performance Tuning Tip – Multi Queue

What is Multi Queue?
It is an acceleration feature that lets you assign more than one packet queue and CPU to an interface. When most of the traffic is accelerated by SecureXL, the CPU load from the CoreXL SND instances can be very high, while the CPU load from the CoreXL FW instances can be very low. This is an inefficient utilization of CPU capacity. By default, the number of CPU cores allocated to CoreXL SND instances is limited by the number of network interfaces that handle the traffic. Because each interface has one traffic queue, only one CPU core can handle each traffic queue at a time. This means that each CoreXL SND instance can use only one CPU core at a time per network interface. Check Point Multi-Queue lets you configure more than one traffic queue for each network interface, so that more than one CPU core (running CoreXL SND) can be used for traffic acceleration per interface. This balances the load efficiently between the CPU cores that run the CoreXL SND instances and the CPU cores that run CoreXL FW instances. Important - Multi-Queue applies only if SecureXL is enabled.

Chapter
Architecture:
R80.x Security Gateway Architecture (Logical Packet Flow)
R80.x Security Gateway Architecture (Content Inspection)
R80.x Security Gateway Architecture (Acceleration Card Offloading)
R80.x Ports Used for Communication by Various Check Point Modules
Performance Tuning:
R80.x Performance Tuning Tip - AES-NI
R80.x Performance Tuning Tip - SMT (Hyper Threading)
R80.x Performance Tuning Tip - Multi Queue
R80.x Performance Tuning Tip - Connection Table
R80.x Performance Tuning Tip - fw monitor
R80.x Performance Tuning Tip - TCPDUMP vs. CPPCAP
R80.x Performance Tuning Tip - DDoS "fw sam" vs. "fwaccel dos"
Cheat Sheet:
R80.x cheat sheet - fw monitor
R80.x cheat sheet - ClusterXL
More interesting articles:
Article list (Heiko Ankenbrand)

Multi-Queue Requirements and Limitations (Tip 1)
- Multi-Queue is not supported on computers with one CPU core.
- Network interfaces must use a driver that supports Multi-Queue. Only network cards that use the igb (1Gb), ixgbe (10Gb), i40e (40Gb), or mlx5_core (40Gb) drivers support Multi-Queue.
- You can configure a maximum of five interfaces with Multi-Queue.
- You must reboot the Security Gateway after all changes in the Multi-Queue configuration.
- For best performance, it is not recommended to assign both an SND and a CoreXL FW instance to the same CPU core.
- Multi-Queue is relevant only if SecureXL and CoreXL are enabled.
- Do not change the IRQ affinity of queues manually. Changing the IRQ affinity of the queues manually can adversely affect performance.
- You cannot use the "sim affinity" or the "fw ctl affinity" commands to change or query the IRQ affinity of Multi-Queue interfaces.
- The number of queues is limited by the number of CPU cores and the type of interface driver:
  igb (1 Gb): maximum 4 RX queues
  ixgbe (10 Gb): maximum 16 RX queues
  i40e (40 Gb): maximum 14 RX queues
  mlx5_core (40 Gb): maximum 10 RX queues
The maximum RX queues limit dictates the largest number of SND/IRQ instances that can empty packet buffers for an individual interface using that driver with Multi-Queue enabled.
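Before enabling Multi-Queue it is worth confirming that the SND cores really are the bottleneck. A few standard gateway commands that can be used for this (a suggested check, not an official procedure):

# show which CPU cores handle interfaces (SND) and which run CoreXL FW instances
fw ctl affinity -l -r
# show how much traffic is accelerated vs. handled in the slower paths
fwaccel stats -s
# watch per-core utilization; busy SND cores with idle FW cores is the Multi-Queue use case
cpview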
Multi-Queue does not work on 3200 / 5000 / 15000 / 23000 appliances in the following scenario (sk114625):
- MQ is enabled for on-board interfaces (e.g., Mgmt, Sync)
- the number of active RX queues was set to either 3 or 4 (with the cpmq set rx_num igb <number> command)
This problem was fixed in Check Point R80.10 and in the Jumbo Hotfix Accumulator for R77.30 since Take_198. The number of traffic queues is limited by the number of CPU cores and the type of network interface card driver. The on-board interfaces on these appliances use the igb driver, which supports up to 4 RX queues; however, the I211 controller on these on-board interfaces supports only up to 2 RX queues.

When Multi-Queue will not help (Tip 2)
- When most of the processing is done in CoreXL - either in the Medium path or in the Firewall path (slow path).
- When all current CoreXL FW instances are highly loaded, so there are no CPU cores that can be reassigned to SecureXL.
- When IPS or other deep-inspection Software Blades are heavily used.
- When all network interface cards are processing the same amount of traffic.
- When all CPU cores that are currently used by SecureXL are congested.
- When trying to increase the traffic session rate.
- When there is not enough diversity of traffic flows. In the extreme case of a single flow, traffic will be handled by only a single CPU core. (Clarification: the more traffic passes to/from different ports/IP addresses, the more you benefit from Multi-Queue. If there is a single traffic flow from a single client to a single server, then Multi-Queue will not help.)

Multi-Queue is recommended when
- the load on CPU cores that run as SND is high (idle < 20%),
- the load on CPU cores that run CoreXL FW instances is low (idle > 50%), and
- there are no CPU cores left to be assigned to the SND by changing interface affinity.

Multi-Queue support on Appliance vs. Open Server
Check Point Appliance: MQ is supported on all appliances that use the igb, ixgbe, i40e or mlx5_core drivers. These expansion line cards for 4000, 12000, and 21000 appliances support Multi-Queue: CPAC-ACC-4-1C, CPAC-ACC-4-1F, CPAC-ACC-8-1C, CPAC-ACC-2-10F, CPAC-ACC-4-10F. This expansion line card for 5000, 13000, and 23000 appliances supports Multi-Queue: CPAC-2-40F-B.
Open Server: Network cards that use the igb (1Gb), ixgbe (10Gb), i40e (40Gb), or mlx5_core (40Gb) drivers support Multi-Queue.
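Regardless of platform, the current Multi-Queue state can be checked with the cpmq tool mentioned above (R77.30/R80.10-era syntax; check the Performance Tuning guide for your release):

# show the Multi-Queue status of the supported interfaces
cpmq get
# include interfaces on which Multi-Queue is currently disabled
cpmq get -a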
Multi-Queue support on Open Server (Intel Network Cards) (Tip 3)
The following list shows an overview of all Intel cards from the Check Point HCL for open servers as of 11/21/2018. The list is cross-referenced to the Intel drivers. I do not assume any liability for the correctness of the information. These lists should only be used to help you find the right drivers; they are not an official document of Check Point, so please always read the official Check Point documents.

Intel network card | Ports | Chipset | PCI ID | Driver | Bus | Speed/Media | MQ
10 Gigabit AT | 1 | 82598EB | 8086:25e7 | ixgbe | PCI-E | 10G Copper | yes
10 Gigabit CX4 | 2 | 82598EB | 8086:10ec | ixgbe | PCI-E | 10G Copper | yes
10 Gigabit XF family (Dual and Single Port models, SR and LR) | 2 | 82598 | 8086:10c6 | ixgbe | PCI-E | 10G Fiber | yes
Ethernet Converged Network Adapter X540-T2 | 2 | X540 | 8086:1528 | ixgbe | PCI-E | 100/1G/10G Copper | yes
Ethernet Server Adapter I340-T2 | 2 | 82580 | - | igb | PCI-E | 10/100/1G Copper | yes
Ethernet Server Adapter I340-T4 | 2 | 82580 | - | igb | PCI-E | 10/100/1G Copper | yes
Ethernet Server Adapter X520 (X520-SR2, X520-SR1, X520-LR1, X520-DA2) | 2 | X520 | - | ixgbe | PCI-E | 10G Fiber | yes
Gigabit VT Quad Port Server Adapter | 4 | 82575GB | 8086:10d6 | igb | PCI-E | 10/100/1G Copper | yes
Intel Gigabit ET2 Quad Port Server Adapter | 4 | - | - | igb | PCI-E | 1G Copper | yes
PRO/10GbE CX4 | 1 | 82597EX | 8086:109e | ixgb | PCI-X | 10G Copper | no
PRO/10GbE LR | 1 | 82597EX | 8086:1b48 | ixgb | PCI-X | 10G Fiber | no
PRO/10GbE SR | 1 | 82597EX | 8086:1a48 | ixgb | PCI-X | 10G Fiber | no
PRO/1000 Dual 82546GB | 2 | 82546GB | 8086:108a | e1000 | PCI-E | 10/100/1G Copper | no
Pro/1000 EF Dual | 2 | 82576 | 8086:10e6 | igb ? | PCI-E | 1G Fiber | yes ?
Pro/1000 ET Dual Port Server Adapter | 2 | 82576 | - | igb | PCI-E | 1G Copper | yes
PRO/1000 ET Quad Port Server Adapter | 4 | 82576 | 8086:10e8 | igb | PCI-E | 10/100/1G Copper | yes
PRO/1000 GT Quad | 4 | 82546 | 8086:10b5 | e1000 | PCI-X | 10/100/1G Copper | no
PRO/1000 MF | 1 | 82546 ? 82545 ? | - | e1000 | PCI-X | 1G Fiber | no
PRO/1000 MF (LX) | 1 | 82546 ? 82545 ? | - | e1000 | PCI-X | 1G Fiber | no
PRO/1000 MF Dual | 2 | 82546 ? 82545 ? | - | e1000 | PCI-X | 1G Fiber | no
PRO/1000 MF Quad | 4 | 82546 ? 82545 ? | - | e1000 | PCI-X | 1G Fiber | no
PRO/1000 PF | 1 | 82571 ? | 8086:107e | e1000 | PCI-E | 1G Fiber | no
PRO/1000 PF Dual | 2 | 82571 ? | 8086:115f | e1000 | PCI-E | 1G Fiber | no
PRO/1000 PF Quad Port Server Adapter | 4 | 82571 ? | 8086:10a5 | e1000 | PCI-E | 1G Fiber | no
PRO/1000 PT | 1 | 82571 | 8086:1082 | e1000 | PCI-E | 10/100/1G Copper | no
PRO/1000 PT Dual | 2 | 82571 | 8086:105e | e1000 | PCI-E | 10/100/1G Copper | no
PRO/1000 PT Dual UTP | 2 | 82571 | 8086:108a | e1000 | PCI-E | 10/100/1G Copper | no
PRO/1000 PT Quad | 4 | 82571 | 8086:10a4 | e1000 | PCI-E | 10/100/1G Copper | no
PRO/1000 PT Quad Low Profile | 4 | 82571 | 8086:10bc | e1000 | PCI-E | 10/100/1G Copper | no
PRO/1000 XF | 1 | 82544 | - | e1000 | PCI-X | 1G Fiber | no
For all "?" I could not clarify the points exactly.
Multi-Queue support on Open Server (HP and IBM Network Cards) (Tip 4)
The following list shows an overview of all HP cards from the Check Point HCL for open servers as of 11/22/2018. The list is cross-referenced to the Intel drivers. I do not assume any liability for the correctness of the information. These lists should only be used to help you find the right drivers; they are not an official document of Check Point, so please always read the official Check Point documents.

HP network card | Ports | Chipset | PCI ID | Driver | Bus | Speed/Media | MQ
Ethernet 1Gb 4-port 331T | 4 | BCM5719 | 14e4:1657 | tg3 | PCI-E | 1G Copper | no
Ethernet 1Gb 4-port 366FLR | 4 | Intel I350 | 8086:1521 | igb | PCI-E | 1G Copper | yes
Ethernet 1Gb 4-port 366T | 4 | Intel I350 | 8086:1521 | igb | PCI-E | 1G Copper | yes
Ethernet 10Gb 2-port 560SFP+ | 2 | Intel 82599EB | 0200: 8086:10fb | ixgbe | PCI-E | 10G Fiber | yes
Ethernet 10Gb 2-port 561FLR-T | 2 | Intel X540-AT2 | 8086:1528 | ixgbe | PCI-E | 10G Copper | yes
HPE Ethernet 10Gb 2-port 562FLR-SFP+ | 2 | Intel X710 | 8086:1572 | i40e | PCI-E | 10G Copper | yes
Ethernet 10Gb 2-port 561T | 2 | Intel X540-AT2 | 8086:1528 | ixgbe | PCI-E | 10G Copper | yes
NC110T | 1 | Intel 82572GI | 8086:10b9 | e1000 | PCI-E | 10/100/1G Copper | no
NC320T | 1 | BCM5721 KFB | 14e4:1659 | tg3 | PCI-E | 10/100/1G Copper | no
NC325m Quad Port | 4 | BCM5715S | 14e4:1679 | tg3 | PCI-E | 1G Copper | no
NC326m PCI Express Dual Port 1Gb Server Adapter for c-Class Blade System | 2 | BCM5715S | - | tg3 | PCI-E | 1G Copper | no
NC340T | 4 | Intel 82546GB | 8086:10b5 | e1000 | PCI-X | 10/100/1G Copper | no
NC360T | 2 | Intel 82571EB | 8086:105e | e1000 | PCI-E | 10/100/1G Copper | no
NC364T | 4 | Intel 82571EB | 8086:10bc | e1000 | PCI-E | 10/100/1G Copper | no
NC365T PCI Express Quad Port | 4 | Intel 82580 | 8086:150e | igb | PCI-E | 10/100/1G Copper | yes
NC373F | 1 | Broadcom 5708 | 14e4:16ac | bnx2 | PCI-E | 1G Copper | no
NC373m Dual Port | 2 | BCM5708S | 14e4:16ac | bnx2 | PCI-E | 10/100/1G Copper | no
NC373T | 1 | Broadcom 5708 | 14e4:16ac | bnx2 | PCI-E | 10/100/1G Copper | no
NC380T PCI Express Dual Port Multifunction Gigabit Server | 2 | BCM5706 | - | bnx2 | PCI-E | 10/100/1G Copper | no
NC522SFP Dual Port 10GbE Server Adapter | 2 | NX3031 | 4040:0100 | ??? | PCI-E | 10G Fiber | no
NC550SFP Dual Port 10GbE Server Adapter | 2 | Emulex OneConnect | 19a2:0700 | be2net | PCI-E | 10G Fiber | no
NC552SFP 10GbE 2-port Ethernet Server | 2 | Emulex OneConnect | 19a2:0710 | be2net | PCI-E | 10G Fiber | no
NC7170 | 2 | Intel 82546EB | 8086:1010 | e1000 | PCI-X | 10/100/1G Copper | no
For all "?" I could not clarify the points exactly.

IBM network card | Ports | Chipset | PCI ID | Driver | Bus | Speed/Media | MQ
Broadcom 10Gb 4-Port Ethernet Expansion Card (CFFh) for IBM BladeCenter | 4 | BCM57710 | - | bnx2x | PCI-E | 10G Fiber | no
Broadcom NetXtreme Quad Port GbE Network Adapter | 4 | I350 | - | igb | PCI-E | 1G Copper | yes
NetXuleme 1000T | 1 | ??? (1) | - | ??? | PCI-X | 10/100/1G Copper | ???
NetXuleme 1000T Dual | 2 | ??? (1) | - | ??? | PCI-X | 10/100/1G Copper | ???
PRO/1000 PT Dual Port Server Adapter | 2 | 82571GB | - | e1000 | PCI-E | 10/100/1G Copper | no
(1) These network cards can't even be found on Google.

Notes on the Intel igb and ixgbe drivers
I used the LKDDb database to identify the drivers. LKDDb is an attempt to build a comprehensive database of hardware and protocols known to Linux kernels. The driver database includes numeric identifiers of the hardware, the kernel configuration menu needed to build the driver, and the driver filename. The database is built automatically from the kernel sources, so it is very easy to keep it updated. This was the basis of the cross-reference between the Check Point HCL and the Intel drivers.
Link to the LKDDb web database: https://cateee.net/lkddb/web-lkddb/
Link to the LKDDb database drivers: igb, ixgbe, i40e, mlx5_core
There you can find output like the following for each driver, e.g. igb:
Numeric ID (from LKDDb) and names (from pci.ids) of recognized devices:
vendor: 8086 ("Intel Corporation"), device: 0438 ("DH8900CC Series Gigabit Network Connection")
vendor: 8086 ("Intel Corporation"), device: 10a9 ("82575EB Gigabit Backplane Connection")
vendor: 8086 ("Intel Corporation"), device: 10c9 ("82576 Gigabit Network Connection")
vendor: 8086 ("Intel Corporation"), device: 10d6 ("82575GB Gigabit Network Connection")
and many more...
How to recognize the driver
With ethtool you can display the version and type of the driver, for example for the interface eth0:
# ethtool -i eth0
driver: igb
version: 2.1.0-k2
firmware-version: 3.2-9
bus-info: 0000:02:00.0

Active RX multi queues - formula
By default, the Security Gateway calculates the number of active RX queues based on this formula:
RX queues = [Total Number of CPU cores] - [Number of CoreXL FW instances]
For example, on a gateway with 8 CPU cores and 6 CoreXL FW instances, 8 - 6 = 2 RX queues are active by default.

Configure
Here I would refer to the following links:
Performance Tuning R80.10 Administration Guide
Performance Tuning R80.20 Administration Guide

References
Best Practices - Security Gateway Performance
Multi-Queue does not work on 3200 / 5000 / 15000 / 23000 appliances when it is enabled for on-board interfaces
Performance Tuning R80.10 Administration Guide
Performance Tuning R80.20 Administration Guide
Intel:
Download Intel® Network Adapter Virtual Function Driver for Intel® 10 Gigabit Ethernet Network Connections
Download Network Adapter Driver for Gigabit PCI Based Network Connections for Linux*
Download Intel® Network Adapter Driver for 82575/6, 82580, I350, and I210/211-Based Gigabit Network Connections for Linu…
LKDDb (Linux Kernel Driver Database): https://cateee.net/lkddb/web-lkddb/

Copyright by Heiko Ankenbrand 1994-2019

zabbix monitor fw kernel memory with snmp

Hi all, I want to monitor FW kernel memory utilization with Zabbix via SNMP, but I cannot find the OID; I have only found the real memory OID. Can anyone share the FW kernel memory OID? Thanks.
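Not the exact OID, but a way to hunt for it yourself: the Check Point enterprise MIB sits under 1.3.6.1.4.1.2620, so walking that subtree and filtering for memory-related names should show which memory counters the gateway exposes (community string and hostname are placeholders):

# walk the Check Point enterprise subtree and filter for memory-related objects
snmpwalk -v2c -c public gw.example.com 1.3.6.1.4.1.2620 | grep -i mem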