What is Multi Queue? |
---|
Multi-Queue is an acceleration feature that lets you assign more than one packet queue (and therefore more than one CPU core) to an interface.
When most of the traffic is accelerated by SecureXL, the CPU load from the CoreXL SND instances can be very high, while the CPU load from the CoreXL FW instances can be very low. This is an inefficient use of CPU capacity.
By default, the number of CPU cores allocated to CoreXL SND instances is limited by the number of network interfaces that handle the traffic. Because each interface has one traffic queue, only one CPU core can handle each traffic queue at a time. This means that each CoreXL SND instance can use only one CPU core at a time for each network interface.
Check Point Multi-Queue lets you configure more than one traffic queue for each network interface, so more than one CPU core (running CoreXL SND) can be used to accelerate traffic on the same interface. This balances the load efficiently between the CPU cores that run the CoreXL SND instances and the CPU cores that run the CoreXL FW instances.
Important:
- Multi-Queue applies only if SecureXL is enabled.
- Since R81, Multi-Queue is enabled by default on all supported interfaces.
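To quickly check the current Multi-Queue state on a gateway, you can query it from Expert mode. This is only a minimal sketch; which tool is available depends on the release (cpmq on R80.10 - R80.30, mq_mng on R80.40 and later):
# cpmq get
# mq_mng --show
Both commands list the interfaces that support Multi-Queue and their current status.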
Chapter |
---|
More interesting articles:
- R80.x Architecture and Performance Tuning - Link Collection
Multi-Queue Requirements and Limitations |
---|
Tip 1
R80.10 to R80.40:
Network card driver | Speed | Maximal number of RX queues |
---|---|---|
igb | 1 Gb | 4 |
ixgbe | 10 Gb | 16 |
i40e | 40 Gb | 14 |
mlx5_core | 40 Gb | 10 |
vmxnet3 * | 10 Gb | 8 |

* Since R80.30 with kernel 3.10 and with R80.40+
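To check how many RX queues the driver of a specific interface supports, and how many are currently configured, the standard Linux channel query can be used from Expert mode. A minimal sketch, with eth0 as an example interface:
# ethtool -l eth0
The "Pre-set maximums" section shows the limit supported by the NIC/driver, and "Current hardware settings" shows the number of queues currently in use (reported as RX or Combined, depending on the driver).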
R81 Multi Queue news:
- Multi Queue is now fully automated:
  - Multi Queue is enabled by default on all supported interfaces.
  - The number of queues on each interface is determined automatically, based on the number of available CPUs (SNDs) and the NIC/driver limitations.
  - Queues are automatically affined to the SND cores (see the affinity check after this list).
  - Multi Queue configuration does not require a reboot in order to be applied.
- Multi Queue is now managed by the Clish command line.
- Multi Queue is now managed by the Out-of-the-box experience performance tool.
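To verify that the queues and CoreXL instances are affined to the expected CPU cores, the affinity listing can be checked from Expert mode. A minimal sketch (the exact output format differs between releases):
# fw ctl affinity -l -v
The output lists the CPU cores assigned to each interface (queue) and to each CoreXL FW instance.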
New supported Multi Queue drivers:
Driver | GAIA version | Speed [Gbps] | Description | Maximal Number of RX Queues |
---|---|---|---|---|
igb | R80.10+ | 1 | Intel® PCIe 1 Gbps | 2-16 (depends on the interface) |
ixgbe | R80.10+ | 10 | Intel® PCIe 10 Gbps | 16 |
i40e | R80.10+ | 40 | Intel® PCIe 40 Gbps | 64 |
i40evf | R81 | 40 | Intel® i40e 40 Gbps | 4 |
mlx5_core | R80.10+ | 40 | Mellanox® ConnectX® mlx5 core driver | 60 |
ena | R81 | 20 | Elastic Network Adapter in Amazon® EC2 | configured automatically |
virtio_net | R81 | 10 | VirtIO paravirtualized device driver from KVM® | configured automatically |
vmxnet3 | R80.40+ | 10 | VMXNET Generation 3 driver from VMware® | configured automatically |
When Multi-Queue will not help |
---|
Tip 2
Multi-Queue is recommended |
---|
Multi-Queue support on Appliance vs. Open Server |
---|
Gateway type | Network interfaces that support Multi-Queue |
---|---|
Check Point Appliance | |
Open Server | Network cards that use igb (1 Gb), ixgbe (10 Gb), i40e (40 Gb), or mlx5_core (40 Gb) drivers support Multi-Queue. |
Multi-Queue support on Open Server (Intel Network Cards) |
---|
Tip 3
The following list gives an overview of all Intel cards in the Check Point HCL for open servers as of 11/21/2018.
The list is cross-referenced against the Intel drivers. I do not assume any liability for the correctness of this information; these lists are only meant to help you find the right drivers. This is not an official Check Point document!
So please always read the official Check Point documents.
| Intel network card | Ports | Chipset | PCI ID | Driver | PCI | Speed | MQ |
|---|---|---|---|---|---|---|---|
| | 1 | 82598EB | 8086:25e7 | ixgbe | PCI-E | 10G Copper | yes |
| | 2 | 82598EB | 8086:10ec | ixgbe | PCI-E | 10G Copper | yes |
| 10 Gigabit XF family (Dual and Single Port models, SR and LR) | 2 | 82598 | 8086:10c6 | ixgbe | PCI-E | 10G Fiber | yes |
| | 2 | X540 | 8086:1528 | ixgbe | PCI-E | 100/1G/10G | yes |
| | 2 | 82580 | - | igb | PCI-E | 10/100/1G | yes |
| | 2 | 82580 | - | igb | PCI-E | 10/100/1G Copper | yes |
| X520-SR2, X520-SR1, X520-LR1, X520-DA2 | 2 | X520 | - | ixgbe | PCI-E | 10G Fiber | yes |
| | 4 | 82575GB | 8086:10d6 | igb | PCI-E | 10/100/1G Copper | yes |
| | 4 | | - | igb | PCI-E | 1G Copper | yes |
| | 1 | 82597EX | 8086:109e | ixgb | PCI-X | 10G Copper | no |
| | 1 | 82597EX | 8086:1b48 | ixgb | PCI-X | 10G Fiber | no |
| | 1 | 82597EX | 8086:1a48 | ixgb | PCI-X | 10G Fiber | no |
| | 2 | 82546GB | 8086:108a | E1000 | PCI-E | 10/100/1G Copper | no |
| | 2 | 82576 | 8086:10e6 | igb ? | PCI-E | 1G Fiber | yes ? |
| | 2 | 82576 | | igb | PCI-E | 1G Copper | yes |
| | 4 | 82576 | 8086:10e8 | igb | PCI-E | 10/100/1G Copper | yes |
| | 4 | 82546 | 8086:10b5 | E1000 | PCI-X | 10/100/1G Copper | no |
| | 1 | 82546 ? 82545 ? | - | E1000 | PCI-X | 1G Fiber | no |
| | 1 | 82546 ? 82545 ? | - | E1000 | PCI-X | 1G Fiber | no |
| | 2 | 82546 ? 82545 ? | - | E1000 | PCI-X | 1G Fiber | no |
| | 4 | 82546 ? 82545 ? | - | E1000 | PCI-X | 1G Fiber | no |
| | 1 | 82571 ? | 8086:107e | E1000 | PCI-E | 1G Fiber | no |
| | 2 | 82571 ? | 8086:115f | E1000 | PCI-E | 1G Fiber | no |
| | 4 | 82571 ? | 8086:10a5 | E1000 | PCI-E | 1G Fiber | no |
| | 1 | 82571 | 8086:1082 | E1000 | PCI-E | 10/100/1G Copper | no |
| | 2 | 82571 | 8086:105e | E1000 | PCI-E | 10/100/1G Copper | no |
| | 2 | 82571 | 8086:108a | E1000 | PCI-E | 10/100/1G Copper | no |
| | 4 | 82571 | 8086:10a4 | E1000 | PCI-E | 10/100/1G Copper | no |
| | 4 | 82571 | 8086:10bc | E1000 | PCI-E | 10/100/1G Copper | no |
| | 1 | 82544 | | E1000 | PCI-X | 1G Fiber | no |
For all "?" I could not clarify the points exactly.
Multi-Queue support on Open Server (HP and IBM Network Cards) |
---|
Tip 4
The following list gives an overview of all HP cards in the Check Point HCL for open servers as of 11/22/2018.
The list is cross-referenced against the corresponding drivers. I do not assume any liability for the correctness of this information; these lists are only meant to help you find the right drivers. This is not an official Check Point document!
So please always read the official Check Point documents.
| HP network card | Ports | Chipset | PCI ID | Driver | PCI | Speed | MQ |
|---|---|---|---|---|---|---|---|
| | 4 | BCM5719 | 14e4:1657 | tg3 | PCI-E | 1G Copper | no |
| | 4 | Intel I350 | 8086:1521 | igb | PCI-E | 1G Copper | yes |
| | 4 | Intel I350 | 8086:1521 | igb | PCI-E | 1G Copper | yes |
| | 2 | Intel 82599EB | 8086:10fb | ixgbe | PCI-E | 10G Fiber | yes |
| | 2 | Intel X540-AT2 | 8086:1528 | ixgbe | PCI-E | 10G Copper | yes |
| | 2 | Intel X710 | 8086:1572 | i40e | PCI-E | 10G Copper | yes |
| | 2 | Intel X540-AT2 | 8086:1528 | ixgbe | PCI-E | 10G Copper | yes |
| | 1 | Intel 82572GI | 8086:10b9 | E1000 | PCI-E | 10/100/1G Copper | no |
| | 1 | BCM5721 KFB | 14e4:1659 | tg3 | PCI-E | 10/100/1G Copper | no |
| | 4 | BCM5715S | 14e4:1679 | tg3 | PCI-E | 1G Copper | no |
| NC326m PCI Express Dual Port 1Gb Server Adapter for c-Class Blade System | 2 | BCM5715S | | tg3 | PCI-E | 1G Copper | no |
| | 4 | Intel 82546GB | 8086:10b5 | E1000 | PCI-X | 10/100/1G Copper | no |
| | 2 | Intel 82571EB | 8086:105e | E1000 | PCI-E | 10/100/1G Copper | no |
| | 4 | Intel 82571EB | 8086:10bc | E1000 | PCI-E | 10/100/1G Copper | no |
| | 4 | Intel | 8086:150e | igb | PCI-E | 10/100/1G Copper | yes |
| | 1 | Broadcom 5708 | 14e4:16ac | bnx2 | PCI-E | 1G Copper | no |
| | 2 | BCM5708S | 14e4:16ac | bnx2 | PCI-E | 10/100/1G Copper | no |
| | 1 | Broadcom 5708 | 14e4:16ac | bnx2 | PCI-E | 10/100/1G Copper | no |
| | 2 | BCM5706 | - | bnx2 | PCI-E | 10/100/1G Copper | no |
| | 2 | NX3031 | 4040:0100 | ??? | PCI-E | 10G Fiber | no |
| | 2 | Emulex OneConn | 19a2:0700 | be2net | PCI-E | 10G Fiber | no |
| | 2 | Emulex OneConn | 19a2:0710 | be2net | PCI-E | 10G Fiber | no |
| | 2 | Intel 82546EB | 8086:1010 | E1000 | PCI-X | 10/100/1G Copper | no |
For all "?" I could not clarify the points exactly.
| IBM network card | Ports | Chipset | PCI ID | Driver | PCI | Speed | MQ |
|---|---|---|---|---|---|---|---|
| Broadcom 10Gb 4-Port Ethernet Expansion Card (CFFh) for IBM BladeCenter | 4 | BCM57710 | | bnx2x | PCI-E | 10G Fiber | no |
| | 4 | I350 | | igb | PCI-E | 1G Copper | yes |
| | 1 | ??? (1) | | ??? | PCI-X | 10/100/1G Copper | ??? |
| | 2 | ??? (1) | | ??? | PCI-X | 10/100/1G Copper | ??? |
| | 2 | 82571GB | | E1000 | PCI-E | 10/100/1G Copper | no |
(1) These network cards cannot even be found via Google.
Notes on the Intel igb and ixgbe drivers |
---|
I used the LKDDb database to identify the drivers. LKDDb is an attempt to build a comprehensive database of hardware and protocols known to Linux kernels. The driver database includes numeric identifiers of hardware, the kernel configuration menu needed to build the driver, and the driver filename. The database is built automatically from the kernel sources, so it is easy to keep it up to date. This was the basis for the cross-reference between the Check Point HCL and the Intel drivers.
Link to LKDDb web database:
https://cateee.net/lkddb/web-lkddb/
Link to LKDDb database driver:
There you can find output like the following for each driver, e.g. igb:
Numeric IDs (from LKDDb) and names (from pci.ids) of recognized devices:
8086 ("Intel Corporation"), device: 0438 ("DH8900CC Series Gigabit Network Connection")
8086 ("Intel Corporation"), device: 10a9 ("82575EB Gigabit Backplane Connection")
8086 ("Intel Corporation"), device: 10c9 ("82576 Gigabit Network Connection")
8086 ("Intel Corporation"), device: 10d6 ("82575GB Gigabit Network Connection")
and many more...
How to recognize the driver |
---|
With ethtool you can display the type and version of the driver, for example for the interface eth0:
# ethtool -i eth0
driver: igb
version: 2.1.0-k2
firmware-version: 3.2-9
bus-info: 0000:02:00.0
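If you want to map several interfaces to their drivers in one step, for example to cross-check them against the tables above, a simple Expert-mode loop does the job. This is just a sketch; the interface names are examples only:
# for IF in eth0 eth1 eth2; do echo "$IF: $(ethtool -i $IF | grep ^driver)"; done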
Active RX multi queues - formula |
---|
By default, Security Gateway calculates the number of active RX queues based on this formula:
RX queues = [Total Number of CPU cores] - [Number of CoreXL FW instances]
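For example, on a gateway with 24 CPU cores and 20 CoreXL FW instances:
RX queues = 24 - 20 = 4
In other words, up to four RX queues are active on each Multi-Queue interface, one for each CPU core that runs a CoreXL SND instance.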
Configure |
---|
For the configuration, I refer to the following guides:
Performance Tuning R80.10 Administration Guide
Performance Tuning R80.20 Administration Guide
References |
---|
Best Practices - Security Gateway Performance
Multi-Queue does not work on 3200 / 5000 / 15000 / 23000 appliances when it is enabled for on-board ...
Performance Tuning R80.10 Administration Guide
Performance Tuning R80.20 Administration Guide
Intel:
Download Intel® Network Adapter Virtual Function Driver for Intel® 10 Gigabit Ethernet Network Conne...
Download Network Adapter Driver for Gigabit PCI Based Network Connections for Linux*
Download Intel® Network Adapter Driver for 82575/6, 82580, I350, and I210/211-Based Gigabit Network ...
LKDDb (Linux Kernel Driver Database):
https://cateee.net/lkddb/web-lkddb/