R81 - Multi Queue (what's new)

Multi Queue is an acceleration feature that lets you assign more than one packet queue and CPU to an interface.

When most of the traffic is accelerated by SecureXL, the CPU load on the CoreXL SND instances can be very high, while the CPU load on the CoreXL FW instances stays very low. This is an inefficient use of CPU capacity.

By default in R80.x, the number of CPU cores allocated to CoreXL SND instances is limited by the number of network interfaces that handle the traffic. Because each interface has one traffic queue, only one CPU core can handle that queue at a time, so each CoreXL SND instance can use only one CPU core per network interface.

Check Point Multi-Queue lets you configure more than one traffic queue for each network interface, so that more than one CPU core (running CoreXL SND) can accelerate traffic for the same interface. This balances the load efficiently between the CPU cores that run the CoreXL SND instances and the CPU cores that run the CoreXL FW instances. Since R81, Multi Queue is enabled by default on all supported interfaces.
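
A quick way to see this split on a running gateway is with the standard affinity and queue tools from expert mode (eth0 below is just a placeholder interface name; exact output differs per version and appliance):

# fw ctl affinity -l        -> list which CPU cores are assigned to interfaces (CoreXL SND) and to CoreXL FW instances
# ethtool -l eth0           -> show how many RX/TX queues (channels) the driver currently exposes for eth0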

R81 Multi Queue:

- Multi Queue is now fully automated:
      - Multi Queue is enabled by default on all supported interfaces.
      - The number of queues on each interface is determined automatically, based on the number of available CPUs (SNDs) and the NIC/driver limitations.
      - Queues are automatically affined to the SND cores (see the verification example after this list).
- Multi Queue configuration does not require a reboot in order to be applied.
- Multi Queue is now managed by the Clish command line.
- Multi Queue is now managed by the Out-of-the-box experience performance tool.
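
To verify the automatic affinity on a given interface, the per-queue IRQs can be cross-checked in /proc (eth0 is a placeholder name; the verbose Clish command in the next section prints the same IRQ numbers directly):

# grep eth0 /proc/interrupts                     -> one line per queue IRQ, with per-CPU interrupt counters
# cat /proc/irq/<IRQ number>/smp_affinity_list   -> CPU core(s) a given queue IRQ is pinned to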

Clish command syntax in R81:

# show interface <interface name> multi-queue verbose                                         -> show the Multi-Queue configuration, including IRQ numbers
# set interface <interface name> multi-queue {off | auto | manual core <IDs of CPU cores>}    -> configure Multi-Queue
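
For example, to put a hypothetical interface eth1-05 back into automatic mode and then confirm the resulting queue count, core assignment and IRQ numbers (the interface name is illustrative only):

# set interface eth1-05 multi-queue auto
# show interface eth1-05 multi-queue verbose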

New supported Multi Queue drivers:

| Driver     | Gaia version | Speed [Gbps] | Description                                    | Maximal Number of RX Queues     |
|------------|--------------|--------------|------------------------------------------------|---------------------------------|
| igb        | R80.10+      | 1            | Intel® PCIe 1 Gbps                             | 2-16 (depends on the interface) |
| ixgbe      | R80.10+      | 10           | Intel® PCIe 10 Gbps                            | 16                              |
| i40e       | R80.10+      | 40           | Intel® PCIe 40 Gbps                            | 64                              |
| i40evf     | R81          | 40           | Intel® i40e 40 Gbps                            | 4                               |
| mlx5_core  | R80.20+      | 40           | Mellanox® ConnectX® mlx5 core driver           | 60                              |
| ena        | R81          | 20           | Elastic Network Adapter in Amazon® EC2         | configured automatically        |
| virtio_net | R81          | 10           | VirtIO paravirtualized device driver from KVM® | configured automatically        |
| vmxnet3    | R80.40+      | 10           | VMXNET Generation 3 driver from VMware®        | configured automatically        |
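
To find out which row of the table applies to a given interface, the driver name can be read with ethtool (eth0 is again a placeholder):

# ethtool -i eth0        -> the "driver:" line (e.g. igb, ixgbe, i40e, mlx5_core) identifies the NIC driver in use
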
5 Replies
Admin

Pretty sure Multiqueue is enabled by default in R80.40 for supported interface types as well.
Nice to see we're adding support for newer interface types in R81 also.

Contributor

I just built an R80.40 GW - no jumbo applied yet (6400 appliance), and Multi-Queue was off by default. There were three options:

- off (default)
- automatic
- manual

I set mine to automatic.

Authority

It was on by default on our 6800. Interesting; it must be the total number of cores available that determines that.

Admin

I believe a 6400 only has four cores, so Multi-Queue doesn't make much sense there.

Champion

Right, with 4 cores the default split would be 1 SND / 3 FW, and there is no point in enabling Multi-Queue if there is only one SND core.

Gaia 3.10 Immersion Self-paced Video Series
now available at http://www.maxpowerfirewalls.com