What is Multi Queue?
It is an acceleration feature that lets you assign more than one packet queue and CPU to an interface.
When most of the traffic is accelerated by SecureXL, the CPU load from the CoreXL SND instances can be very high, while the CPU load from the CoreXL FW instances can be very low. This is an inefficient use of CPU capacity.
By default, the number of CPU cores allocated to CoreXL SND instances is limited by the number of network interfaces that handle the traffic. Because each interface has one traffic queue, only one CPU core can handle each traffic queue at a time. This means that each CoreXL SND instance can use only one CPU core at a time for each network interface.
Check Point Multi-Queue lets you configure more than one traffic queue for each network interface, so that more than one CPU core (running a CoreXL SND instance) can be used to accelerate traffic on that interface. This balances the load efficiently between the CPU cores that run CoreXL SND instances and the CPU cores that run CoreXL FW instances. Since R81, Multi-Queue is enabled by default on all supported interfaces.
Important:
- Multi-Queue applies only if SecureXL is enabled.
- Since R81, Multi-Queue is enabled by default on all supported interfaces.
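Because Multi-Queue applies only when SecureXL is enabled, a quick sanity check on the gateway is a reasonable first step. A minimal sketch (run in Expert mode; eth0 is only an example interface name):

```bash
# Check that SecureXL is running - Multi-Queue only applies to accelerated traffic
fwaccel stat

# Show how many RX/TX queues the NIC/driver exposes and how many are currently in use
# (replace eth0 with your interface name)
ethtool -l eth0
```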
Chapter
More interesting articles:
- R80.x Architecture and Performance Tuning - Link Collection
Multi-Queue Requirements and Limitations
Tip 1
R80.10 to R80.40:
| Network card driver | Speed | Maximal number of RX queues |
|---|---|---|
| igb | 1 Gb | 4 |
| ixgbe | 10 Gb | 16 |
| i40e | 40 Gb | 14 |
| mlx5_core | 40 Gb | 10 |
| vmxnet3 * | 10 Gb | 8 |
* Supported since R80.30 with kernel 3.10 and with R80.40+.
R81 Multi Queue news:
- Multi Queue is now fully automated:
  - Multi Queue is enabled by default on all supported interfaces.
  - The number of queues on each interface is determined automatically, based on the number of available CPUs (SNDs) and the NIC/driver limitations.
  - Queues are automatically affined to the SND cores.
  - Multi Queue configuration does not require a reboot in order to be applied.
- Multi Queue is now managed by the Clish command line.
- Multi Queue is now managed by the Out-of-the-box experience performance tool.
New supported Multi Queue drivers:
| Driver | Gaia version | Speed [Gbps] | Description | Maximal Number of RX Queues |
|---|---|---|---|---|
| igb | R80.10+ | 1 | Intel® PCIe 1 Gbps | 2-16 (depends on the interface) |
| ixgbe | R80.10+ | 10 | Intel® PCIe 10 Gbps | 16 |
| i40e | R80.10+ | 40 | Intel® PCIe 40 Gbps | 64 |
| i40evf | R81 | 40 | Intel® i40e 40 Gbps | 4 |
| mlx5_core | R80.10+ | 40 | Mellanox® ConnectX® mlx5 core driver | 60 |
| ena | R81 | 20 | Elastic Network Adapter in Amazon® EC2 | configured automatically |
| virtio_net | R81 | 10 | VirtIO paravirtualized device driver from KVM® | configured automatically |
| vmxnet3 | R80.40+ | 10 | VMXNET Generation 3 driver from VMware® | configured automatically |
When Multi-Queue will not help
Tip 2
Multi-Queue is recommended
Multi-Queue support on Appliance vs. Open Server
| Gateway type | Network interfaces that support Multi-Queue |
|---|---|
| Check Point Appliance | |
| Open Server | Network cards that use igb (1 Gb), ixgbe (10 Gb), i40e (40 Gb), or mlx5_core (40 Gb) drivers support Multi-Queue. |
Multi-Queue support on Open Server (Intel Network Cards)
Tip 3
The following list gives an overview of all Intel cards from the Check Point HCL for open servers, as of 11/21/2018.
The list is cross-referenced against the Intel drivers. I do not assume any liability for the correctness of the information. These lists should only be used to help you find the right drivers. This is not an official Check Point document!
So please always read the official Check Point documentation.
| Intel network card | Ports | Chipset | PCI ID | Driver | PCI | Speed | MQ |
|---|---|---|---|---|---|---|---|
| | 1 | 82598EB | 8086:25e7 | ixgbe | PCI-E | 10G Copper | yes |
| | 2 | 82598EB | 8086:10ec | ixgbe | PCI-E | 10G Copper | yes |
| 10 Gigabit XF family (Dual and Single Port models, SR and LR) | 2 | 82598 | 8086:10c6 | ixgbe | PCI-E | 10G Fiber | yes |
| | 2 | X540 | 8086:1528 | ixgbe | PCI-E | 100/1G/10G | yes |
| | 2 | 82580 | - | igb | PCI-E | 10/100/1G | yes |
| | 2 | 82580 | - | igb | PCI-E | 10/100/1G Copper | yes |
| X520-SR2, X520-SR1, X520-LR1, X520-DA2 | 2 | X520 | - | ixgbe | PCI-E | 10G Fiber | yes |
| | 4 | 82575GB | 8086:10d6 | igb | PCI-E | 10/100/1G Copper | yes |
| | 4 | | - | igb | PCI-E | 1G Copper | yes |
| | 1 | 82597EX | 8086:109e | ixgb | PCI-X | 10G Copper | no |
| | 1 | 82597EX | 8086:1b48 | ixgb | PCI-X | 10G Fiber | no |
| | 1 | 82597EX | 8086:1a48 | ixgb | PCI-X | 10G Fiber | no |
| | 2 | 82546GB | 8086:108a | E1000 | PCI-E | 10/100/1G Copper | no |
| | 2 | 82576 | 8086:10e6 | igb ? | PCI-E | 1G Fiber | yes ? |
| | 2 | 82576 | | igb | PCI-E | 1G Copper | yes |
| | 4 | 82576 | 8086:10e8 | igb | PCI-E | 10/100/1G Copper | yes |
| | 4 | 82546 | 8086:10b5 | E1000 | PCI-X | 10/100/1G Copper | no |
| | 1 | 82546 ? 82545 ? | - | E1000 | PCI-X | 1G Fiber | no |
| | 1 | 82546 ? 82545 ? | - | E1000 | PCI-X | 1G Fiber | no |
| | 2 | 82546 ? 82545 ? | - | E1000 | PCI-X | 1G Fiber | no |
| | 4 | 82546 ? 82545 ? | - | E1000 | PCI-X | 1G Fiber | no |
| | 1 | 82571 ? | 8086:107e | E1000 | PCI-E | 1G Fiber | no |
| | 2 | 82571 ? | 8086:115f | E1000 | PCI-E | 1G Fiber | no |
| | 4 | 82571 ? | 8086:10a5 | E1000 | PCI-E | 1G Fiber | no |
| | 1 | 82571 | 8086:1082 | E1000 | PCI-E | 10/100/1G Copper | no |
| | 2 | 82571 | 8086:105e | E1000 | PCI-E | 10/100/1G Copper | no |
| | 2 | 82571 | 8086:108a | E1000 | PCI-E | 10/100/1G Copper | no |
| | 4 | 82571 | 8086:10a4 | E1000 | PCI-E | 10/100/1G Copper | no |
| | 4 | 82571 | 8086:10bc | E1000 | PCI-E | 10/100/1G Copper | no |
| | 1 | 82544 | | E1000 | PCI-X | 1G Fiber | no |

For all entries marked with "?", I could not verify the details exactly.
Multi-Queue support on Open Server (HP and IBM Network Cards)
Tip 4
The following list gives an overview of all HP cards from the Check Point HCL for open servers, as of 11/22/2018.
The list is cross-referenced against the Intel drivers. I do not assume any liability for the correctness of the information. These lists should only be used to help you find the right drivers. This is not an official Check Point document!
So please always read the official Check Point documentation.
| HP network card | Ports | Chipset | PCI ID | Driver | PCI | Speed | MQ |
|---|---|---|---|---|---|---|---|
| | 4 | BCM5719 | 14e4:1657 | tg3 | PCI-E | 1G Copper | no |
| | 4 | Intel I350 | 8086:1521 | igb | PCI-E | 1G Copper | yes |
| | 4 | Intel I350 | 8086:1521 | igb | PCI-E | 1G Copper | yes |
| | 2 | Intel 82599EB | 0200: 8086:10fb | ixgbe | PCI-E | 10G Fiber | yes |
| | 2 | Intel X540-AT2 | 8086:1528 | ixgbe | PCI-E | 10G Copper | yes |
| | 2 | Intel X710 | 8086:1572 | i40e | PCI-E | 10G Copper | yes |
| | 2 | Intel X540-AT2 | 8086:1528 | ixgbe | PCI-E | 10G Copper | yes |
| | 1 | Intel 82572GI | 8086:10b9 | E1000 | PCI-E | 10/100/1G Copper | no |
| | 1 | BCM5721 KFB | 14e4:1659 | tg3 | PCI-E | 10/100/1G Copper | no |
| | 4 | BCM5715S | 14e4:1679 | tg3 | PCI-E | 1G Copper | no |
| NC326m PCI Express Dual Port 1Gb Server Adapter for c-Class Blade System | 2 | BCM5715S | | tg3 | PCI-E | 1G Copper | no |
| | 4 | Intel 82546GB | 8086:10b5 | E1000 | PCI-X | 10/100/1G Copper | no |
| | 2 | Intel 82571EB | 8086:105e | E1000 | PCI-E | 10/100/1G Copper | no |
| | 4 | Intel 82571EB | 8086:10bc | E1000 | PCI-E | 10/100/1G Copper | no |
| | 4 | Intel | 8086:150e | igb | PCI-E | 10/100/1G Copper | yes |
| | 1 | Broadcom 5708 | 14e4:16ac | bnx2 | PCI-E | 1G Copper | no |
| | 2 | BCM5708S | 14e4:16ac | bnx2 | PCI-E | 10/100/1G Copper | no |
| | 1 | Broadcom 5708 | 14e4:16ac | bnx2 | PCI-E | 10/100/1G Copper | no |
| | 2 | BCM5706 | - | bnx2 | PCI-E | 10/100/1G Copper | no |
| | 2 | NX3031 | 4040:0100 | ??? | PCI-E | 10G Fiber | no |
| | 2 | Emulex OneConn | 19a2:0700 | be2net | PCI-E | 10G Fiber | no |
| | 2 | Emulex OneConn | 19a2:0710 | be2net | PCI-E | 10G Fiber | no |
| | 2 | Intel 82546EB | 8086:1010 | E1000 | PCI-X | 10/100/1G Copper | no |
For all "?" I could not clarify the points exactly.
| IBM network card | Ports | Chipset | PCI ID | Driver | PCI | Speed | MQ |
|---|---|---|---|---|---|---|---|
| Broadcom 10Gb 4-Port Ethernet Expansion Card (CFFh) for IBM BladeCenter | 4 | BCM57710 | | bnx2x | PCI-E | 10G Fiber | no |
| | 4 | I350 | | igb | PCI-E | 1G Copper | yes |
| | 1 | ??? (1) | | ??? | PCI-X | 10/100/1G Copper | ??? |
| | 2 | ??? (1) | | ??? | PCI-X | 10/100/1G Copper | ??? |
| | 2 | 82571GB | | E1000 | PCI-E | 10/100/1G Copper | no |
(1) These network cards cannot even be found on Google.
Notes on the Intel igb and ixgbe drivers
I used the LKDDb database to identify the drivers. LKDDb is an attempt to build a comprehensive database of hardware and protocols known by Linux kernels. The driver database includes the numeric identifiers of the hardware, the kernel configuration menu needed to build the driver, and the driver filename. The database is built automatically from the kernel sources, so it is very easy to keep it up to date. This was the basis of the cross-reference between the Check Point HCL and the Intel drivers.
Link to LKDDb web database:
https://cateee.net/lkddb/web-lkddb/
Link to LKDDb database driver:
Here you can find the following output for all drivers e.g. igb:
Numeric ID (from LKDDb) and names (from pci.ids) of recognized devices:
- 8086 ("Intel Corporation"), device: 0438 ("DH8900CC Series Gigabit Network Connection")
- 8086 ("Intel Corporation"), device: 10a9 ("82575EB Gigabit Backplane Connection")
- 8086 ("Intel Corporation"), device: 10c9 ("82576 Gigabit Network Connection")
- 8086 ("Intel Corporation"), device: 10d6 ("82575GB Gigabit Network Connection")
- and many more...
How to recognize the driver
With ethtool you can display the type and version of the driver, for example for the interface eth0:
```
# ethtool -i eth0
driver: igb
version: 2.1.0-k2
firmware-version: 3.2-9
bus-info: 0000:02:00.0
```
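To map every interface on the box against the driver tables above in one go, a small loop over ethtool works. This is only a sketch, assuming the usual ethN interface naming on Gaia:

```bash
# Print the driver used by each ethN interface (to match against the driver tables above)
for path in /sys/class/net/eth*; do
    ifname=$(basename "$path")
    printf '%s: ' "$ifname"
    ethtool -i "$ifname" | grep '^driver:'
done
```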
Active RX multi queues - formula
By default, Security Gateway calculates the number of active RX queues based on this formula:
RX queues = [Total Number of CPU cores] - [Number of CoreXL FW instances]
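A quick worked example of this formula (illustrative numbers only): on a 16-core gateway with 12 CoreXL FW instances (the instance count can be seen with `fw ctl multik stat`), the remaining 4 cores act as SNDs, so each Multi-Queue interface gets 4 active RX queues by default.

```bash
# Illustrative calculation only - the numbers below are assumptions, not from a real gateway
TOTAL_CORES=16      # total CPU cores on the gateway
FW_INSTANCES=12     # CoreXL FW instances (see: fw ctl multik stat)
echo "Active RX queues = $(( TOTAL_CORES - FW_INSTANCES ))"   # -> Active RX queues = 4
```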
Configure
Here I would refer to the following links:
Performance Tuning R80.10 Administration Guide
Performance Tuning R80.20 Administration Guide
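As a rough orientation only (the commands are version-dependent, so please verify the exact syntax in the Performance Tuning Guide for your release): on R80.x with the Gaia 2.6.18 kernel, Multi-Queue is handled with the cpmq tool in Expert mode, while the Gaia 3.10 based releases use the mq_mng tool.

```bash
# R80.10 - R80.30 (Gaia 2.6.18 kernel), Expert mode:
cpmq get          # show interfaces with Multi-Queue enabled
cpmq get -a       # show Multi-Queue status of all supported interfaces

# R80.40 / R81 (Gaia 3.10 kernel), Expert mode:
mq_mng --show     # show the current queue and CPU assignment per interface
```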
References
Best Practices - Security Gateway Performance
Multi-Queue does not work on 3200 / 5000 / 15000 / 23000 appliances when it is enabled for on-board ...
Performance Tuning R80.10 Administration Guide
Performance Tuning R80.20 Administration Guide
Intel:
Download Intel® Network Adapter Virtual Function Driver for Intel® 10 Gigabit Ethernet Network Conne...
Download Network Adapter Driver for Gigabit PCI Based Network Connections for Linux*
Download Intel® Network Adapter Driver for 82575/6, 82580, I350, and I210/211-Based Gigabit Network ...
LKDDb (Linux Kernel Driver Database):
https://cateee.net/lkddb/web-lkddb/
That explains why you wanted the information about NICs
Hi Dameon,
this was not planned.
As I said, I once spent 3 hours cross-referencing all network cards between the Check Point HCL and Intel.
After that I wrote this article. I will complete and revise the article in the next few days.
Here is the link to the original discussion:
Open Server - HCL for multi queue network cards
Regards
Heiko
well done Heiko Ankenbrand
I noticed that the following drivers and network cards are not yet supported by the Check Point Open Server HCL.
Is there a reason that 40Gbit/s network cards are not yet present in the HCL?
I think from a performance point of view, an open server with MQ should also be able to handle that.
What are the reasons why they are not supported?
NICs currently supported by the i40e driver, according to Intel:
Intel® Ethernet Controller X710-AM2
Intel® Ethernet Controller X710-BM2
Intel® Ethernet Controller XL710-AM1
Intel® Ethernet Controller XL710-AM2
Intel® Ethernet Controller XL710-BM1
Intel® Ethernet Controller XL710-BM2
Intel® Ethernet Converged Network Adapter X710-DA2
Intel® Ethernet Converged Network Adapter X710-DA4
Intel® Ethernet Converged Network Adapter XL710-QDA1
Intel® Ethernet Converged Network Adapter XL710-QDA2
Regards
Heiko
nice list
Yes! The Intel Open Server network card list is very nice.
Thanks
Christian
And again 3 hours of driver search and referencing for HP network cards.
Regards
Heiko
Hi Heiko Ankenbrand,
First of all! Thank you very much for the information.
But I have one more question! How did you cross-reference the network cards?
What sources did you use?
Regards
Harry
Important clarification
We appreciate Heiko Ankenbrand's article on multi-queue and research. However, some points need to be mentioned:
Once more, thanks to the author for his contribution to this community. Keep up the good work.
Here I agree with Valeri Loukine. You should always look at the HCL and the relevant SKs, readmes, and documentation. I was just trying to reference the HCL against the network card drivers.
Thank you Valeri, I have included the following in the article.
CUT>>>
The list is cross-referenced to the Intel drivers. I do not assume any liability for the correctness of the information. These lists should only be used to help you find the right drivers. It is not an official document of Check Point!
So please always read the official documents of Check Point.
<<<CUT
Regards
Heiko
I used the LKDDb database to identify the drivers. LKDDb is an attempt to build a comprehensive database of hardware and protocols known by Linux kernels. The driver database includes the numeric identifiers of the hardware, the kernel configuration menu needed to build the driver, and the driver filename. The database is built automatically from the kernel sources, so it is very easy to keep it up to date. This was the basis of the cross-reference between the Check Point HCL and the Intel drivers.
Link to LKDDb web database:
https://cateee.net/lkddb/web-lkddb/
Link to LKDDb database driver:
This is a very interesting web link, the LKDDb web database:
https://cateee.net/lkddb/web-lkddb/
Thanks for this article.
tg3 seems to support MQ on Linux kernels >= 3:
```
02:00.1 Ethernet controller: Broadcom Corporation NetXtreme BCM5720 Gigabit Ethernet PCIe

root@debian:~# ethtool -i eth0
driver: tg3
version: 3.137
firmware-version: FFV7.6.14 bc 5720-v1.31
bus-info: 0000:02:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: no

root@debian:~# ethtool -l eth0
Channel parameters for eth0:
Pre-set maximums:
RX: 4
TX: 4
Other: 0
Combined: 0
Current hardware settings:
RX: 4
TX: 4
Other: 0
Combined: 0
```
That would seem to be the case based on your ethtool output, but based on my very negative experiences in the past with Broadcom NICs I wouldn't trust them to carry any production traffic whatsoever, let alone a heavy volume of traffic that would require use of Multi-Queue. No thanks.
Please note that I am not suggesting anyone use Broadcom adapters; my post was only for informative purposes, because this is the only good article on the Internet where people can get information on Multi-Queue supported devices and drivers. It would be good if people shared information about other NICs not described here.
Today I tested the mlx5_core driver with a Mellanox MCX4121A-XCAT ConnectX-4 Lx EN (dual 10G), and it turns out it supports more than the 10 RX queues stated as the 'Maximal number of RX queues' in this article:
```
# ethtool -i enp3s0f0
driver: mlx5_core
version: 3.0-1 (January 2015)
firmware-version: 14.25.1020

# ethtool -l enp3s0f0
Channel parameters for enp3s0f0:
Pre-set maximums:
RX: 0
TX: 0
Other: 0
Combined: 24
Current hardware settings:
RX: 0
TX: 0
Other: 0
Combined: 24
```
Hi,
we have been running the 25G NIC for the last few months, first on the R80.30 3.10 customer release and now on the normal GA version.
If you want to add to the list 🙂
Intel(R) Ethernet Controller XXV710 for 25GbE SFP28
HPE Eth 10/25Gb 2p 661SFP28 Adptr
870825-B21
```
ethtool -i eth4
driver: i40e
version: 2.7.12
firmware-version: 6.00 0x800036cb 1.1747.0
expansion-rom-version:
bus-info: 0000:12:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes

ethtool eth4
Settings for eth4:
    Supported ports: [ FIBRE ]
    Supported link modes: 25000baseCR/Full
    Supported pause frame use: Symmetric
    Supports auto-negotiation: Yes
    Supported FEC modes: Not reported
    Advertised link modes: 25000baseCR/Full
    Advertised pause frame use: No
    Advertised auto-negotiation: Yes
    Advertised FEC modes: Not reported
    Speed: 25000Mb/s
    Duplex: Full
    Port: Direct Attach Copper
    PHYAD: 0
    Transceiver: internal
    Auto-negotiation: off
    Supports Wake-on: d
    Wake-on: d
    Current message level: 0x0000000f (15)
        drv probe link timer
    Link detected: yes

ethtool -l eth4
Channel parameters for eth4:
Pre-set maximums:
RX: 0
TX: 0
Other: 1
Combined: 8
```
Quick question: are all appliance ports compatible with Multi-Queue?
Generally yes as long as it is an adapter using a supported driver (determined with ethtool -i (interface)). Supported drivers are:
- Intel: igb (4 queues maximum), ixgbe (16 queues maximum) & i40e (48 queues maximum)
- Mellanox: mlx5_core (Gaia 2.6.18 - 10 queues maximum, Gaia 3.10 - 48 queues maximum), mlx_compat (Gaia 2.6.18 - 10 queues maximum, Gaia 3.10 - 48 queues maximum)
Note that the driver (and possibly kernel version) will usually limit the maximum number of queues per interface, but there are some exceptions such as the onboard I211 interfaces that use the igb driver, yet the NIC hardware only supports 2 queues. See sk114625 for more details.
The limit of no more than 5 interfaces that can have Multi-Queue enabled simultaneously is a Gaia 2.6.18 kernel limitation, and has been lifted in Gaia 3.10.
There have also been some observations that newer Broadcom adapters (which are used on some open hardware firewalls) appear to support Multi-Queue, but I wouldn't trust Broadcom NIC hardware to perform even the most basic function (such as being violently smashed with a hammer) with any degree of stability or competence, let alone implement a feature like Multi-Queue.
Hello Heiko,
I'm currently working on making a complete list of all network cards that support Multi-Queue.
The only information I found was about expansion line cards. Does this mean there are no onboard cards that support Multi-Queue?
Thanks
Joshua.
Update:
Since R80.30 with kernel 3.10, and with R80.40, MQ is supported with the vmxnet3 driver.