Josh_Smith
Participant

Multi-Queue and LACP configuration

The ClusterXL Admin Guide states that when utilizing Link Aggregation, "To get the best performance, use static affinity for Link Aggregation", and it shows examples where the affinities for the bond member interfaces are set to different cores.  This makes sense to me, as you would not want a LACP bond to have its slave interfaces pinned to the same CPU core.  Example below:

[Attached screenshot: lacp_affinity.png]
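
For context, this is roughly what that static assignment looks like from Expert mode (a sketch only - the interface names and core numbers are placeholders, and the exact method depends on the version and whether SecureXL is enabled):

  # illustrative only: interface names and CPU IDs are placeholders
  fw ctl affinity -s -i eth1-01 2    # pin one bond member to CPU 2
  fw ctl affinity -s -i eth1-02 3    # pin the other bond member to CPU 3
  fw ctl affinity -l -r              # verify the resulting per-CPU assignments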

However, with Multi-Queue, various documentation states NOT to manually set affinities, as doing so will cause performance issues.

If this is the case, is it safe to have Multi-Queue enabled on 10Gb interfaces that are part of a LACP bond where the queues map to the same CPU cores?  Specifically, I have two LACP bonds, each consisting of two 10Gb interfaces, with Multi-Queue enabled on all four 10Gb interfaces.  Bond-to-interface-to-CPU mapping below:

[Attached screenshot: BondPortCPUMapping.png]
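
For completeness, this is roughly how the queue-to-CPU mapping above can be checked from Expert mode (a sketch assuming pre-R80.40 cpmq tooling; the interface name is a placeholder):

  cpmq get -v                         # Multi-Queue status plus per-queue IRQ/CPU affinity
  cat /proc/interrupts | grep eth2    # raw per-queue IRQ counters for one bond member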

3 Replies
Timothy_Hall
Legend

The recommendation in the ClusterXL guide to use static interface affinities is outdated.  It assumes that SecureXL is disabled (and thus automatic interface affinity is not active at all) or that automatic interface affinity does not do a good job of balancing traffic among the interfaces.  This latter assumption was definitely the case in R76 and earlier, but automatic interface affinity was substantially improved in R77+ and I have not needed to set static interface affinities for quite a long time.
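
As a sanity check (a sketch; output formats vary by version), you can confirm that automatic interface affinity is in effect rather than static assignments:

  sim affinity -l          # interfaces shown as "Auto" are balanced automatically by SecureXL
  fw ctl affinity -l -r    # per-CPU view of interface and CoreXL FW instance affinities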

Multi-Queue does not directly care about bond/aggregate interfaces; it is simply enabled on the underlying physical interfaces.  MQ allows all SND/IRQ cores (up to certain limits) to have their own queues for an enabled interface, which they empty independently.  The packets associated with a single connection are always "stuck" to the same queue/core to avoid out-of-order delivery, and I assume there is some kind of balancing performed for new connections among the queues for a particular interface.  You would most definitely NOT want any kind of static interface affinities defined on an interface with Multi-Queue enabled, as doing so would interfere with the Multi-Queue sticking/balancing mechanism.  The likely result would be overloading of individual SND/IRQ cores, and possibly even out-of-order packet delivery, which is very undesirable.
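
If you want to watch that per-queue stickiness/balancing yourself, a rough sketch (counter names assume the ixgbe driver and the interface name is a placeholder; other drivers label their per-queue statistics differently):

  ethtool -S eth2-01 | grep rx_queue    # per-RX-queue packet counters
  cat /proc/interrupts | grep eth2-01   # per-queue IRQ hit counts, one column per CPU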

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
HeikoAnkenbrand
Champion

What is Multi Queue?

It is an acceleration feature that lets you assign more than one packet queue and CPU to an interface.

When most of the traffic is accelerated by SecureXL, the CPU load from the CoreXL SND instances can be very high, while the CPU load from the CoreXL FW instances can be very low. This is an inefficient utilization of CPU capacity.

By default, the number of CPU cores allocated to CoreXL SND instances is limited by the number of network interfaces that handle the traffic. Because each interface has one traffic queue, only one CPU core can handle each traffic queue at a time. This means that each CoreXL SND instance can use only one CPU core at a time for each network interface.

Check Point Multi-Queue lets you configure more than one traffic queue for each network interface. For each interface, you can use more than one CPU core (that runs CoreXL SND) for traffic acceleration. This balances the load efficiently between the CPU cores that run the CoreXL SND instances and the CPU cores that run CoreXL FW instances.

Important - Multi-Queue applies only if SecureXL is enabled.
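
A minimal sketch of turning it on (pre-R80.40 cpmq tooling assumed; newer releases use mq_mng instead, and a reboot is required either way):

  cpmq get     # show which supported interfaces currently have Multi-Queue enabled
  cpmq set     # interactive menu to enable/disable Multi-Queue per interface
  # reboot the gateway afterwards for the change to take effect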

Multi-Queue Requirements and Limitations
  • Multi-Queue is not supported on computers with one CPU core.
  • Network interfaces must use a driver that supports Multi-Queue. Only network cards that use the igb (1Gb), ixgbe (10Gb), i40e (40Gb), or mlx5_core (40Gb) drivers support Multi-Queue (a driver check is sketched after this list).
  • You can configure a maximum of five interfaces with Multi-Queue.
  • You must reboot the Security Gateway after all changes in the Multi-Queue configuration.
  • For best performance, it is not recommended to assign both SND and a CoreXL FW instance to the same CPU core.
  • Do not change the IRQ affinity of queues manually. Changing the IRQ affinity of the queues manually can adversely affect performance.
  • Multi-Queue is relevant only if SecureXL and CoreXL are enabled.
  • You cannot use the “sim affinity” or the “fw ctl affinity” commands to change and query the IRQ affinity of the Multi-Queue interfaces.
  • The number of queues is limited by the number of CPU cores and the type of interface driver:

    Network card driver    Speed    Maximal number of RX queues
    igb                    1 Gb     4
    ixgbe                  10 Gb    16
    i40e                   40 Gb    14
    mlx5_core              40 Gb    10

  • The maximum RX queues limit dictates the largest number of SND/IRQ instances that can empty packet buffers for an individual Multi-Queue-enabled interface using that driver.
  • Multi-Queue does not work on 3200 / 5000 / 15000 / 23000 appliances in the scenario described in sk114625.
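
A quick way to confirm the driver behind an interface before enabling Multi-Queue (a sketch; the interface name is a placeholder):

  ethtool -i eth2-01 | grep driver    # e.g. "driver: ixgbe" for a 10Gb port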
➜ CCSM Elite, CCME, CCTE ➜ www.checkpoint.tips
HeikoAnkenbrand
Champion

More information about Multi-Queue can be found here:

R80.x Performance Tuning Tip – Multi Queue

➜ CCSM Elite, CCME, CCTE ➜ www.checkpoint.tips
