HeikoAnkenbrand
Champion

R81 - Multi Queue (what's new)

Multi Queue is an acceleration feature that lets you assign more than one packet queue and CPU to an interface.

When most of the traffic is accelerated by SecureXL, the CPU load on the CoreXL SND instances can be very high, while the CPU load on the CoreXL FW instances can be very low. This is an inefficient use of CPU capacity.

By default in R80.x, the number of CPU cores allocated to CoreXL SND instances is limited by the number of network interfaces that handle the traffic. Because each interface has one traffic queue, only one CPU core can handle each traffic queue at a time, so each CoreXL SND instance can use only one CPU core at a time per network interface. Check Point Multi-Queue lets you configure more than one traffic queue for each network interface, so that more than one CPU core (running CoreXL SND) can accelerate traffic on each interface. This balances the load efficiently between the CPU cores that run the CoreXL SND instances and the CPU cores that run the CoreXL FW instances. Since R81, Multi-Queue is enabled by default on all supported interfaces.
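As a quick way to see how a gateway is currently split (a generic illustration, not specific to any appliance), the Expert-mode command fw ctl affinity -l -r should list which CPU cores handle SND/IRQ work and which run the CoreXL FW instances:

# fw ctl affinity -l -r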

R81 Multi Queue:

- Multi Queue is now fully automated:
      - Multi Queue is enabled by default on all supported interfaces.
      - The number of queues on each interface is determined automatically, based on the number of available CPUs (SNDs) and the NIC/driver limitations.
      - Queues are automatically affined to the SND cores.
- Multi Queue configuration does not require a reboot in order to be applied.
- Multi Queue is now managed by the Clish command line.
- Multi Queue is now managed by the Out-of-the-box experience performance tool.

Clish command syntax in R81:

# show interface <interface name> multi-queue verbose   -> shows the Multi-Queue configuration, including IRQ numbers
# set interface <interface name> multi-queue {off | auto | manual core <IDs of CPU cores>}   -> configures Multi-Queue
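For example, on a hypothetical interface named eth0, letting Gaia pick the number of queues automatically and then verifying the result would look like this:

# set interface eth0 multi-queue auto
# show interface eth0 multi-queue verbose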

New supported Multi Queue drivers:

Driver | GAIA version | Speed [Gbps] | Description | Maximal Number of RX Queues
igb | R80.10+ | 1 | Intel® PCIe 1 Gbps | 2-16 (depends on the interface)
ixgbe | R80.10+ | 10 | Intel® PCIe 10 Gbps | 16
i40e | R80.10+ | 40 | Intel® PCIe 40 Gbps | 64
i40evf | R81 | 40 | Intel® i40e 40 Gbps | 4
mlx5_core | R80.20+ | 40 | Mellanox® ConnectX® mlx5 core driver | 60
ena | R81 | 20 | Elastic Network Adapter in Amazon® EC2 | configured automatically
virtio_net | R81 | 10 | VirtIO paravirtualized device driver from KVM® | configured automatically
vmxnet3 | R80.40+ | 10 | VMXNET Generation 3 driver from VMware® | configured automatically
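If you are not sure which of these drivers an interface uses, ethtool from Expert mode shows the bound driver (eth0 below is only a placeholder name):

# ethtool -i eth0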
➜ CCSM Elite, CCME, CCTE
16 Replies
PhoneBoy
Admin

Pretty sure Multiqueue is enabled by default in R80.40 for supported interface types as well.
Nice to see we're adding support for newer interface types in R81 also.

genisis__
Leader

I just built an R80.40 GW - no jumbo applied yet (6400 appliance), and Multi-Queue was off by default. There were three options:

- off (default)
- automatic
- manual

I set mine to automatic.

Kaspars_Zibarts
Employee

It was on by default on our 6800. Interesting; it must be the total number of available cores that determines that.

PhoneBoy
Admin

I believe a 6400 only has four cores, so multiqueue doesn't make much sense.

Timothy_Hall
Champion

Right, with 4 cores that would be a 1/3 default split; there is no point in enabling Multi-Queue if there is only one SND core.

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
Luis_Miguel_Mig
Advisor

I have been given two gateways with 4 cores and a license for 4 cores.
With Multi-Queue now, would you recommend upgrading to 6 or 8 cores to make use of it (while keeping the 4-core license)?

Timothy_Hall
Champion

I haven't checked this behavior since moving to Gaia 3.10 in R80.40+, but the number of licensed cores will dictate the total number of cores that can be part of your CoreXL split, which includes both SND/IRQ cores and Firewall Worker cores. So I don't think the cores above 4 will be useful if you aren't licensed for them, other than for use by generic Gaia OS processes.
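If you want to double-check, cplic print (from Expert mode) lists the installed licenses, and fw ctl multik stat shows how many CoreXL FW instances are actually running and on which cores (the exact output varies by version):

# cplic print
# fw ctl multik stat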

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
Luis_Miguel_Mig
Advisor

Good point. I am testing in the lab in an environment with 6 cores; I have configured 3 workers, and I can see Multi-Queue active with 3 SNDs on the other three cores.
But I forgot that the SND count is limited as well, so with the real 4-core license I would only see one SND.

I was wondering if it would be possible to emulate my environment (with only four licensed cores) using temporary licenses. Otherwise, I won't be able to test the production configuration in the lab; I will only be able to test when I swap in the production license.

HuseinS
Participant

Nice information!
Are there more drivers that support MQ?

_Val_
Admin

From sk153373

Multi-queue is supported on the following drivers:

Driver | Max Speed (Gbps) | Description | Maximal Number of RX Queues
igb | 1 | Intel® Network Adapter Driver for PCIe 1 Gigabit Ethernet Network | 2-16 (depends on the interface)
ixgbe | 10 | Intel® Network Adapter Driver for PCIe 10 Gigabit Ethernet Network | 16
i40e | 40 | Intel® Network Adapter Driver for PCIe 40 Gigabit Ethernet Network | 64
i40evf | 40 | Intel® i40e driver for Virtual Function Network Devices | 4
mlx5_core | 40 | Mellanox® ConnectX® mlx5 core driver | 60
ena | 20 | Elastic Network Adapter in Amazon® EC2 | Configured automatically
virtio_net | 10 | VirtIO paravirtualized device driver from KVM® | Configured automatically
vmxnet3 | 10 | VMXNET Generation 3 driver from VMware® | Configured automatically

 

Timothy_Hall
Champion

While the number of queues used by vmxnet3 is configured automatically, the ultimate maximum is 8 due to a driver limitation.

I confirmed this by setting up a 12-core firewall with a 10/2 split, as shown below, and I have already left feedback on sk153373:

[Expert@gw-c9e604:0]# mq_mng --show -vv
Total 12 cores. Multiqueue 10 cores: 0,1,2,3,4,5,6,7,8,9
i/f type state mode cores
------------------------------------------------------------------------------------------------
eth0 vmxnet3 Up Auto (8/8) 0(59),1(60),2(61),3(62),4(63),
5(64),6(65),7(66)
eth1 vmxnet3 Up Auto (8/8) 0-11(68),0-11(69),0-11(70),0-1
1(71),0-11(72),0-11(73),0-11(7
4),0-11(75)

------------------------------------------------------------------------------------------------
eth0 <vmxnet3> max 9999 cur 0
03:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)
------------------------------------------------------------------------------------------------
eth1 <vmxnet3> max 9999 cur 0
0b:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)

core interfaces queue irq rx packets tx packets
------------------------------------------------------------------------------------------------
0-11 eth1 eth1-rxtx-0 68 0 0
eth1-rxtx-1 69 0 0
eth1-rxtx-2 70 0 0
eth1-rxtx-3 71 0 0
eth1-rxtx-4 72 0 0
eth1-rxtx-5 73 0 0
eth1-rxtx-6 74 0 0
eth1-rxtx-7 75 0 0
0 eth0 eth0-rxtx-0 59 222 0
1 eth0 eth0-rxtx-1 60 0 0
2 eth0 eth0-rxtx-2 61 0 54
3 eth0 eth0-rxtx-3 62 0 16
4 eth0 eth0-rxtx-4 63 0 0
5 eth0 eth0-rxtx-5 64 0 1
6 eth0 eth0-rxtx-6 65 0 0
7 eth0 eth0-rxtx-7 66 0 0

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
HeikoAnkenbrand
Champion

Hi @Timothy_Hall,

When I search the VMware KB, I often find information mentioning a limit of 8, but I can't find anything that confirms it.
Do you have a reference where we can read up on this?

Regards
Heiko

➜ CCSM Elite, CCME, CCTE
Timothy_Hall
Champion

The best response from VMware itself that I've been able to find is here; it turns out the vmxnet3 driver itself supports up to 32 queues, but Linux can only support 8:

https://sourceforge.net/p/open-vm-tools/mailman/open-vm-tools-discuss/thread/CALqauyAzi7uKQJ4ir5JPVk...

There is also a good article here:

https://www.reddit.com/r/vmware/comments/ako0ei/vm_has_8_rxqueues_only_4_are_used/

 

 

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
_Val_
Admin

@Timothy_Hall I suspect "automatically" should be translated into "it depends" 🙂

Daniel_Kavan
Advisor

Thank you for the command!

I'm getting this error when I run it. Maybe the driver isn't supported?

set interface eth10 multi-queue auto

error: Interface eth10 does not exist (yes it does exist though - it's my ext interface)

 

What command is everyone using to see what driver is being used?

Daniel_
Advisor

Please provide the output of:

ethtool -i eth10

ethtool -l eth10
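(For reference: ethtool -i shows the driver name and version bound to the interface, and ethtool -l shows the RX/combined queue counts the NIC supports and currently uses.)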
