Multi-Queue is an acceleration feature that lets you assign more than one packet queue and CPU core to an interface.
When most of the traffic is accelerated by SecureXL, the CPU load from the CoreXL SND instances can be very high, while the CPU load from the CoreXL FW instances can be very low. This is an inefficient utilization of CPU capacity.
By default, the number of CPU cores allocated to CoreXL SND instances is limited by the number of network interfaces that handle the traffic. Because each interface has one traffic queue, only one CPU core can handle each traffic queue at a time. This means that each CoreXL SND instance can use only one CPU core at a time for each network interface.
Check Point Multi-Queue lets you configure more than one traffic queue for each network interface, so that more than one CPU core (running CoreXL SND) can accelerate the traffic of each interface. This balances the load efficiently between the CPU cores that run the CoreXL SND instances and the CPU cores that run the CoreXL FW instances.
Important:
- Multi-Queue applies only if SecureXL is enabled.
- Since R81, Multi-Queue is enabled by default on all supported interfaces.
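To quickly verify these prerequisites on a Security Gateway, you can check the SecureXL status and the current interface/instance affinity from Expert mode. A minimal sketch (output formats vary between versions):
# fwaccel stat
# fw ctl affinity -l
The first command shows whether SecureXL is enabled; the second lists which CPU cores handle the network interfaces (CoreXL SND) and which cores run the CoreXL FW instances.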
More interesting articles:
- R80.x Architecture and Performance Tuning - Link Collection
Multi-Queue Requirements and Limitations |
Tip 1
R80.10 to R80.40:
- Multi-Queue is not supported on computers with one CPU core.
- Network interfaces must use a driver that supports Multi-Queue. Only network cards that use the igb (1 Gb), ixgbe (10 Gb), i40e (40 Gb), or mlx5_core (40 Gb) drivers support Multi-Queue.
- You can configure a maximum of five interfaces with Multi-Queue.
- You must reboot the Security Gateway after all changes in the Multi-Queue configuration.
- For best performance, it is not recommended to assign both SND and a CoreXL FW instance to the same CPU core.
- Do not change the IRQ affinity of queues manually. Changing the IRQ affinity of the queues manually can adversely affect performance.
- Multi-Queue is relevant only if SecureXL and CoreXL are enabled.
- You cannot use the “sim affinity” or the “fw ctl affinity” commands to change or query the IRQ affinity of Multi-Queue interfaces.
- Since R80.30 with kernel 3.10, and since R80.40, Multi-Queue is also supported with the vmxnet3 driver.
- The number of queues is limited by the number of CPU cores and the type of interface driver:
| Network card driver | Speed | Maximal number of RX queues |
| --- | --- | --- |
| igb | 1 Gb | 4 |
| ixgbe | 10 Gb | 16 |
| i40e | 40 Gb | 14 |
| mlx5_core | 40 Gb | 10 |
| vmxnet3 * | 10 Gb | 8 |
* Supported since R80.30 with kernel 3.10, and in R80.40 and higher.
- The maximum number of RX queues dictates the largest number of CoreXL SND instances that can empty packet buffers for an individual Multi-Queue-enabled interface that uses that driver.
- Multi-Queue does not work on 3200 / 5000 / 15000 / 23000 appliances in the following scenario (sk114625):
  - Multi-Queue is enabled for on-board interfaces (e.g., Mgmt, Sync), and
  - the number of active RX queues was set to either 3 or 4 (with the "cpmq set rx_num igb <number>" command).
  The on-board interfaces on these appliances use the igb driver, which supports up to 4 RX queues. However, the I211 controller on these on-board interfaces supports only up to 2 RX queues. Refer to sk114625 for the releases in which this problem was fixed.
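In R80.10 to R80.40, the Multi-Queue configuration is viewed and changed with the cpmq tool from Expert mode. A minimal sketch (the exact options differ between versions, so check the Performance Tuning Administration Guide of your release; a reboot is required after changes):
# cpmq get
# cpmq set
# cpmq set rx_num igb <number>
cpmq get shows the current Multi-Queue status of the supported interfaces, cpmq set starts the interactive Multi-Queue configuration, and cpmq set rx_num overrides the number of active RX queues for a driver (as mentioned above for the igb driver).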
R81 Multi Queue news:
- Multi Queue is now fully automated:
- Multi Queue is enabled by default on all supported interfaces.
- The number of queues on each interface is determined automatically, based on the number of available CPUs (SNDs) and the NIC/driver limitations.
- Queues are automatically affined to the SND cores.
- Multi Queue configuration does not require a reboot in order to be applied.
- Multi Queue is now managed by the Clish command line.
- Multi Queue is now managed by the Out-of-the-box experience performance tool.
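In R81, the automatically applied configuration can be reviewed with the Multi-Queue management tool from Expert mode. A minimal sketch (option names may differ between builds):
# mq_mng --show
The output lists each supported interface, its driver, and the RX queues together with the SND cores they are affined to.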
Supported Multi-Queue drivers (including the drivers newly supported in R81):
| Driver | Gaia version | Speed [Gbps] | Description | Maximal number of RX queues |
| --- | --- | --- | --- | --- |
| igb | R80.10+ | 1 | Intel® PCIe 1 Gbps | 2-16 (depends on the interface) |
| ixgbe | R80.10+ | 10 | Intel® PCIe 10 Gbps | 16 |
| i40e | R80.10+ | 40 | Intel® PCIe 40 Gbps | 64 |
| i40evf | R81 | 40 | Intel® i40e 40 Gbps | 4 |
| mlx5_core | R80.10+ | 40 | Mellanox® ConnectX® mlx5 core driver | 60 |
| ena | R81 | 20 | Elastic Network Adapter in Amazon® EC2 | Configured automatically |
| virtio_net | R81 | 10 | VirtIO paravirtualized device driver from KVM® | Configured automatically |
| vmxnet3 | R80.40+ | 10 | VMXNET Generation 3 driver from VMware® | Configured automatically |
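Independently of the table above, you can query how many queues a specific NIC and driver actually expose with the standard Linux ethtool channel parameters, for example for eth0 (field names depend on the driver):
# ethtool -l eth0
The "Pre-set maximums" section shows the largest number of queues the driver supports on this interface, and "Current hardware settings" shows how many are currently active.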
When Multi-Queue will not help |
Tip 2
- When most of the processing is done in CoreXL - either in the Medium path, or in the Firewall path (Slow path).
- All current CoreXL FW instances are highly loaded, so there are no CPU cores that can be reassigned to SecureXL.
- When IPS or other deep-inspection Software Blades are heavily used.
- When all network interface cards are processing the same amount of traffic.
- When all CPU cores that are currently used by SecureXL are congested.
- When trying to increase traffic session rate.
- When there is not enough diversity of traffic flows. In the extreme case of a single flow, for example, traffic will be handled only by a single CPU core. (Clarification: The more traffic is passing to/from different ports/IP addresses, the more you benefit from Multi-Queue. If there is a single traffic flow from a single Client to a single Server, then Multi-Queue will not help.)
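To check whether most of the traffic is really accelerated or instead handled in the Medium/Firewall path, you can look at the SecureXL statistics summary (counter names differ between versions):
# fwaccel stats -s
A high share of F2F (forwarded to firewall) or PXL/Medium-path packets indicates that the load sits on the CoreXL FW instances, so adding more queues for the SND will not help.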
Multi-Queue is recommended |
- Load on CPU cores that run as SND is high (idle < 20%).
- Load on CPU cores that run CoreXL FW instances is low (idle > 50%).
- There are no CPU cores left to be assigned to the SND by changing interface affinity.
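To check these conditions, compare the idle values of the SND cores with those of the CoreXL FW cores. A minimal sketch (in top, press 1 to switch to the per-CPU view):
# fw ctl affinity -l -r
# top
fw ctl affinity -l -r shows, per CPU core, whether it serves the interfaces (SND) or a CoreXL FW instance; top then shows the idle percentage of each of these cores.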
Multi-Queue support on Appliance vs. Open Server |
The network interfaces that support Multi-Queue depend on the gateway type:
Check Point Appliance:
- Multi-Queue is supported on all appliances that use the following drivers: igb, ixgbe, i40e, mlx5_core.
- These expansion line cards for 4000, 12000, and 21000 appliances support Multi-Queue:
  - CPAC-ACC-4-1C
  - CPAC-ACC-4-1F
  - CPAC-ACC-8-1C
  - CPAC-ACC-2-10F
  - CPAC-ACC-4-10F
- This expansion line card for 5000, 13000, and 23000 appliances supports Multi-Queue:
Open Server:
- Network cards that use the igb (1 Gb), ixgbe (10 Gb), i40e (40 Gb), or mlx5_core (40 Gb) drivers support Multi-Queue.
Multi-Queue support on Open Server (Intel Network Cards) |
Tip 3
The following list shows an overview of all Intel cards from the Check Point HCL for open servers as of 11/21/2018.
The list is cross-referenced with the Intel drivers. I do not assume any liability for the correctness of this information. These lists should only be used to help you find the right drivers; they are not an official Check Point document!
So please always read the official Check Point documents.
For all entries marked with "?", I could not clarify the details exactly.
Multi-Queue support on Open Server (HP and IBM Network Cards) |
Tip 4
The following list shows an overview of all HP cards from the Check Point HCL for open servers as of 11/22/2018.
The list is cross-referenced with the Intel drivers. I do not assume any liability for the correctness of this information. These lists should only be used to help you find the right drivers; they are not an official Check Point document!
So please always read the official Check Point documents.
For all entries marked with "?", I could not clarify the details exactly.
(1) These network cards could not even be found via Google.
Notes on the Intel igb and ixgbe drivers |
I used the LKDDb database to identify the drivers. LKDDb is an attempt to build a comprehensive database of hardware and protocols known by Linux kernels. The driver database includes numeric identifiers of the hardware, the kernel configuration menu needed to build the driver, and the driver filename. The database is built automatically from the kernel sources, so it is easy to keep it up to date. This was the basis of the cross-reference between the Check Point HCL and the Intel drivers.
Link to LKDDb web database:
https://cateee.net/lkddb/web-lkddb/
Links to the LKDDb entries for the drivers:
igb, ixgbe, i40e, mlx5_core
There you can find output like the following for each driver, e.g. for igb:
Numeric ID (from LKDDb) and names (from pci.ids) of recognized devices:
- vendor: 8086 ("Intel Corporation"), device: 0438 ("DH8900CC Series Gigabit Network Connection")
- vendor: 8086 ("Intel Corporation"), device: 10a9 ("82575EB Gigabit Backplane Connection")
- vendor: 8086 ("Intel Corporation"), device: 10c9 ("82576 Gigabit Network Connection")
- vendor: 8086 ("Intel Corporation"), device: 10d6 ("82575GB Gigabit Network Connection")
and many more...
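To look up your own network cards in the LKDDb, you first need their PCI vendor and device IDs. On an open server running Gaia, you can list them from Expert mode with the standard lspci command:
# lspci -nn | grep -i ethernet
The IDs are printed in the form [8086:10c9], which corresponds to the vendor and device values in the LKDDb output above.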
How to recognize the driver |
With ethtool, you can display the version and type of the driver, for example for the interface eth0:
# ethtool -i eth0
driver: igb
version: 2.1.0-k2
firmware-version: 3.2-9
bus-info: 0000:02:00.0
Active RX multi queues - formula |
By default, Security Gateway calculates the number of active RX queues based on this formula:
RX queues = [Total Number of CPU cores] - [Number of CoreXL FW instances]
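A minimal sketch for applying this formula on a gateway from Expert mode (the second command lists the CoreXL FW instances):
# grep -c ^processor /proc/cpuinfo
# fw ctl multik stat
For example, on a gateway with 8 CPU cores and 6 CoreXL FW instances, the default is 8 - 6 = 2 active RX queues.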
Here I would refer to the following links:
Performance Tuning R80.10 Administration Guide
Performance Tuning R80.20 Administration Guide
Best Practices - Security Gateway Performance
Multi-Queue does not work on 3200 / 5000 / 15000 / 23000 appliances when it is enabled for on-board ...
Intel:
Download Intel® Network Adapter Virtual Function Driver for Intel® 10 Gigabit Ethernet Network Conne...
Download Network Adapter Driver for Gigabit PCI Based Network Connections for Linux*
Download Intel® Network Adapter Driver for 82575/6, 82580, I350, and I210/211-Based Gigabit Network ...
LKDDb (Linux Kernel Driver Database):
https://cateee.net/lkddb/web-lkddb/
➜ CCSM Elite, CCME, CCTE ➜ www.checkpoint.tips