HeikoAnkenbrand

R80.x Performance Tuning Tip – Multi Queue

What is Multi Queue?

 

It is an acceleration feature that lets you assign more than one packet queue and CPU to an interface.

When most of the traffic is accelerated by SecureXL, the CPU load from the CoreXL SND instances can be very high, while the CPU load from the CoreXL FW instances can be very low. This is an inefficient utilization of CPU capacity.

By default, the number of CPU cores allocated to CoreXL SND instances is limited by the number of network interfaces that handle the traffic. Because each interface has one traffic queue, only one CPU core can handle each traffic queue at a time. This means that each CoreXL SND instance can use only one CPU core at a time for each network interface.

Check Point Multi-Queue lets you configure more than one traffic queue for each network interface. For each interface, you can use more than one CPU core (that runs CoreXL SND) for traffic acceleration. This balances the load efficiently between the CPU cores that run the CoreXL SND instances and the CPU cores that run the CoreXL FW instances. Since R81, Multi Queue is enabled by default on all supported interfaces.
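
Before tuning, it is worth verifying on the gateway that the SND cores really are the bottleneck. A minimal check with standard Gaia tools (the core and interface layout shown will differ per gateway):

# fw ctl affinity -l -r
# top

The first command shows which CPU cores handle the interfaces (SND) and which run the CoreXL FW instances; in top, press '1' for a per-core view (or use cpview).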

Important:
- Multi-Queue applies only if SecureXL is enabled (a quick check follows below).
- Since R81, Multi Queue is enabled by default on all supported interfaces.
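
A quick way to verify the first point directly on the gateway (standard commands on all R80.x releases):

# fwaccel stat
# fwaccel6 stat

If SecureXL is disabled, Multi-Queue has no effect.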

Chapter

More interesting articles:

- R80.x Architecture and Performance Tuning - Link Collection

Multi-Queue Requirements and Limitations

Tip 1

R80.10 to R80.40:

  • Multi-Queue is not supported on computers with one CPU core.
  • Network interfaces must use a driver that supports Multi-Queue. Only network cards that use the igb (1Gb), ixgbe (10Gb), i40e (40Gb), or mlx5_core (40Gb) drivers support Multi-Queue.
  • You can configure a maximum of five interfaces with Multi-Queue.
  • You must reboot the Security Gateway after all changes in the Multi-Queue configuration.
  • For best performance, it is not recommended to assign both an SND and a CoreXL FW instance to the same CPU core.
  • Do not change the IRQ affinity of the queues manually; this can adversely affect performance.
  • Multi-Queue is relevant only if SecureXL and CoreXL are enabled.
  • You cannot use the "sim affinity" or "fw ctl affinity" commands to change or query the IRQ affinity of Multi-Queue interfaces.
  • Since R80.30 with kernel 3.10, and with R80.40, Multi-Queue is also supported with the vmxnet3 driver.
  • The number of queues is limited by the number of CPU cores and the type of interface driver:

Network card driver | Speed | Maximal number of RX queues
igb                 | 1 Gb  | 4
ixgbe               | 10 Gb | 16
i40e                | 40 Gb | 14
mlx5_core           | 40 Gb | 10
vmxnet3 *           | 10 Gb | 8

* Since R80.30 with kernel 3.10 and with R80.40+

  • The maximal number of RX queues limits how many SND/IRQ instances can empty packet buffers for a single Multi-Queue-enabled interface that uses this driver.
  • Multi-Queue does not work on 3200 / 5000 / 15000 / 23000 appliances in the following scenario (sk114625): the on-board interfaces on these appliances use the igb driver, which supports up to 4 RX queues, but the I211 controller on these on-board interfaces supports only up to 2 RX queues.
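
On R80.10 to R80.30 the Multi-Queue configuration is reviewed and changed with the cpmq tool. A minimal sketch (output format varies per release; see the Performance Tuning guides linked at the end of this article):

# cpmq get -a
# cpmq set

cpmq get -a shows the Multi-Queue status of all supported interfaces; cpmq set starts an interactive menu to enable or disable Multi-Queue per interface (a reboot is required afterwards on these releases).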

R81 Multi Queue news:

- Multi Queue is now fully automated:
      - Multi Queue is enabled by default on all supported interfaces.
      - The number of queues on each interface is determined automatically, based on the number of available CPUs (SNDs) and the NIC/driver limitations.
      - Queues are automatically affined to the SND cores.
- Multi Queue configuration does not require a reboot in order to be applied.
- Multi Queue is now managed by the Clish command line.
- Multi Queue is now managed by the Out-of-the-box experience performance tool.
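
A sketch of how to review the automated configuration (the interface name eth0 is only an example; the mq_mng flags may differ per release, so check the Performance Tuning Guide for your version):

# ethtool -l eth0
# mq_mng --show

ethtool -l shows the configured and active queue counts of an interface with standard Linux tooling; mq_mng is the Multi-Queue management tool available in Expert mode on R80.40 and later.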

New supported Multi Queue drivers:

Driver     | GAIA version | Speed [Gbps] | Description                                    | Maximal number of RX queues
igb        | R80.10+      | 1            | Intel® PCIe 1 Gbps                             | 2-16 (depends on the interface)
ixgbe      | R80.10+      | 10           | Intel® PCIe 10 Gbps                            | 16
i40e       | R80.10+      | 40           | Intel® PCIe 40 Gbps                            | 64
i40evf     | R81          | 40           | Intel® i40e 40 Gbps                            | 4
mlx5_core  | R80.10+      | 40           | Mellanox® ConnectX® mlx5 core driver           | 60
ena        | R81          | 20           | Elastic Network Adapter in Amazon® EC2         | configured automatically
virtio_net | R81          | 10           | VirtIO paravirtualized device driver from KVM® | configured automatically
vmxnet3    | R80.40+      | 10           | VMXNET Generation 3 driver from VMware®        | configured automatically

 

 

When Multi-Queue will not help

Tip 2

  • When most of the processing is done in CoreXL - either in the Medium path, or in the Firewall path (Slow path).
  • All current CoreXL FW instances are highly loaded, so there are no CPU cores that can be reassigned to SecureXL.
  • When IPS, or other deep inspection Software Blades are heavily used.
  • When all network interface cards are processing the same amount of traffic.
  • When all CPU cores that are currently used by SecureXL are congested.
  • When trying to increase traffic session rate.
  • When there is not enough diversity of traffic flows. In the extreme case of a single flow, for example, traffic will be handled only by a single CPU core. (Clarification: The more traffic is passing to/from different ports/IP addresses, the more you benefit from Multi-Queue. If there is a single traffic flow from a single Client to a single Server, then Multi-Queue will not help.)

Multi-Queue is recommended when:

  • Load on CPU cores that run as SND is high (idle < 20%).
  • Load on CPU cores that run CoreXL FW instances is low (idle > 50%).
  • There are no CPU cores left to be assigned to the SND by changing interface affinity.
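
To check these idle values and the current core assignment, the following commands can be used (the 20% / 50% thresholds are rules of thumb):

# cpstat os -f multi_cpu
# fw ctl affinity -l -r
# fw ctl multik stat

cpstat os -f multi_cpu shows the per-CPU usage including idle time; fw ctl affinity -l -r maps cores to interfaces/SND and CoreXL FW instances; fw ctl multik stat lists the CoreXL FW instances with their assigned CPU and connection counts.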
Multi-Queue support on Appliance vs. Open Server

Gateway type: Check Point Appliance

  • MQ is supported on all appliances that use the following drivers: igb, ixgbe, i40e, mlx5_core.
  • These expansion line cards for 4000, 12000, and 21000 appliances support Multi-Queue:
    • CPAC-ACC-4-1C
    • CPAC-ACC-4-1F
    • CPAC-ACC-8-1C
    • CPAC-ACC-2-10F
    • CPAC-ACC-4-10F
  • This expansion line card for 5000, 13000, and 23000 appliances supports Multi-Queue:
    • CPAC-2-40F-B

Gateway type: Open Server

  • Network cards that use the igb (1Gb), ixgbe (10Gb), i40e (40Gb), or mlx5_core (40Gb) drivers support Multi-Queue.
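
To match an installed card against the driver tables in the next sections, the PCI vendor and device ID is the most reliable key. It can be read with standard Linux tools on Gaia:

# lspci -nn | grep -i ethernet

The numeric ID in square brackets (for example [8086:1528]) can then be looked up in the tables below or in the LKDDb database referenced at the end of this article.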

 

 

Multi-Queue support on Open Server (Intel Network Cards)

Tip 3

 

The following list gives an overview of all Intel cards in the Check Point HCL for open servers, as of 11/21/2018.

The list is cross-referenced with the Intel drivers. I do not assume any liability for the correctness of this information. These lists are only meant to help you find the right drivers; they are not official Check Point documents!

So please always read the official Check Point documents.

Intel network card | Ports | Chipset | PCI ID | Driver | PCI | Speed | MQ
10 Gigabit AT | 1 | 82598EB | 8086:25e7 | ixgbe | PCI-E | 10G Copper | yes
10 Gigabit CX4 | 2 | 82598EB | 8086:10ec | ixgbe | PCI-E | 10G Copper | yes
10 Gigabit XF family (Dual and Single Port models, SR and LR) | 2 | 82598 | 8086:10c6 | ixgbe | PCI-E | 10G Fiber | yes
Ethernet Converged Network Adapter X540-T2 | 2 | X540 | 8086:1528 | ixgbe | PCI-E | 100/1G/10G Copper | yes
Ethernet Server Adapter I340-T2 | 2 | 82580 | - | igb | PCI-E | 10/100/1G Copper | yes
Ethernet Server Adapter I340-T4 | 2 | 82580 | - | igb | PCI-E | 10/100/1G Copper | yes
Ethernet Server Adapter X520 (X520-SR2, X520-SR1, X520-LR1, X520-DA2) | 2 | X520 | - | ixgbe | PCI-E | 10G Fiber | yes
Gigabit VT Quad Port Server Adapter | 4 | 82575GB | 8086:10d6 | igb | PCI-E | 10/100/1G Copper | yes
Intel Gigabit ET2 Quad Port Server Adapter | 4 | - | - | igb | PCI-E | 1G Copper | yes
PRO/10GbE CX4 | 1 | 82597EX | 8086:109e | ixgb | PCI-X | 10G Copper | no
PRO/10GbE LR | 1 | 82597EX | 8086:1b48 | ixgb | PCI-X | 10G Fiber | no
PRO/10GbE SR | 1 | 82597EX | 8086:1a48 | ixgb | PCI-X | 10G Fiber | no
PRO/1000 Dual 82546GB | 2 | 82546GB | 8086:108a | e1000 | PCI-E | 10/100/1G Copper | no
PRO/1000 EF Dual | 2 | 82576 | 8086:10e6 | igb ? | PCI-E | 1G Fiber | yes ?
PRO/1000 ET Dual Port Server Adapter | 2 | 82576 | - | igb | PCI-E | 1G Copper | yes
PRO/1000 ET Quad Port Server Adapter | 4 | 82576 | 8086:10e8 | igb | PCI-E | 10/100/1G Copper | yes
PRO/1000 GT Quad | 4 | 82546 | 8086:10b5 | e1000 | PCI-X | 10/100/1G Copper | no
PRO/1000 MF | 1 | 82546 ? / 82545 ? | - | e1000 | PCI-X | 1G Fiber | no
PRO/1000 MF (LX) | 1 | 82546 ? / 82545 ? | - | e1000 | PCI-X | 1G Fiber | no
PRO/1000 MF Dual | 2 | 82546 ? / 82545 ? | - | e1000 | PCI-X | 1G Fiber | no
PRO/1000 MF Quad | 4 | 82546 ? / 82545 ? | - | e1000 | PCI-X | 1G Fiber | no
PRO/1000 PF | 1 | 82571 ? | 8086:107e | e1000 | PCI-E | 1G Fiber | no
PRO/1000 PF Dual | 2 | 82571 ? | 8086:115f | e1000 | PCI-E | 1G Fiber | no
PRO/1000 PF Quad Port Server Adapter | 4 | 82571 ? | 8086:10a5 | e1000 | PCI-E | 1G Fiber | no
PRO/1000 PT | 1 | 82571 | 8086:1082 | e1000 | PCI-E | 10/100/1G Copper | no
PRO/1000 PT Dual | 2 | 82571 | 8086:105e | e1000 | PCI-E | 10/100/1G Copper | no
PRO/1000 PT Dual UTP | 2 | 82571 | 8086:108a | e1000 | PCI-E | 10/100/1G Copper | no
PRO/1000 PT Quad | 4 | 82571 | 8086:10a4 | e1000 | PCI-E | 10/100/1G Copper | no
PRO/1000 PT Quad Low Profile | 4 | 82571 | 8086:10bc | e1000 | PCI-E | 10/100/1G Copper | no
PRO/1000 XF | 1 | 82544 | - | e1000 | PCI-X | 1G Fiber | no

 For all "?" I could not clarify the points exactly. 

Multi-Queue support on Open Server (HP and IBM Network Cards)

Tip 4

The following list gives an overview of all HP and IBM cards in the Check Point HCL for open servers, as of 11/22/2018.

The list is cross-referenced with the corresponding drivers. I do not assume any liability for the correctness of this information. These lists are only meant to help you find the right drivers; they are not official Check Point documents!

So please always read the official Check Point documents.

HP network card | Ports | Chipset | PCI ID | Driver | PCI | Speed | MQ
Ethernet 1Gb 4-port 331T | 4 | BCM5719 | 14e4:1657 | tg3 | PCI-E | 1G Copper | no
Ethernet 1Gb 4-port 366FLR | 4 | Intel I350 | 8086:1521 | igb | PCI-E | 1G Copper | yes
Ethernet 1Gb 4-port 366T | 4 | Intel I350 | 8086:1521 | igb | PCI-E | 1G Copper | yes
Ethernet 10Gb 2-port 560SFP+ | 2 | Intel 82599EB | 8086:10fb | ixgbe | PCI-E | 10G Fiber | yes
Ethernet 10Gb 2-port 561FLR-T | 2 | Intel X540-AT2 | 8086:1528 | ixgbe | PCI-E | 10G Copper | yes
HPE Ethernet 10Gb 2-port 562FLR-SFP+ | 2 | Intel X710 | 8086:1572 | i40e | PCI-E | 10G Copper | yes
Ethernet 10Gb 2-port 561T | 2 | Intel X540-AT2 | 8086:1528 | ixgbe | PCI-E | 10G Copper | yes
NC110T | 1 | Intel 82572GI | 8086:10b9 | e1000 | PCI-E | 10/100/1G Copper | no
NC320T | 1 | BCM5721 KFB | 14e4:1659 | tg3 | PCI-E | 10/100/1G Copper | no
NC325m Quad Port | 4 | BCM5715S | 14e4:1679 | tg3 | PCI-E | 1G Copper | no
NC326m PCI Express Dual Port 1Gb Server Adapter for c-Class Blade System | 2 | BCM5715S | - | tg3 | PCI-E | 1G Copper | no
NC340T | 4 | Intel 82546GB | 8086:10b5 | e1000 | PCI-X | 10/100/1G Copper | no
NC360T | 2 | Intel 82571EB | 8086:105e | e1000 | PCI-E | 10/100/1G Copper | no
NC364T | 4 | Intel 82571EB | 8086:10bc | e1000 | PCI-E | 10/100/1G Copper | no
NC365T PCI Express Quad Port | 4 | Intel 82580 | 8086:150e | igb | PCI-E | 10/100/1G Copper | yes
NC373F | 1 | Broadcom 5708 | 14e4:16ac | bnx2 | PCI-E | 1G Copper | no
NC373m Dual Port | 2 | BCM5708S | 14e4:16ac | bnx2 | PCI-E | 10/100/1G Copper | no
NC373T | 1 | Broadcom 5708 | 14e4:16ac | bnx2 | PCI-E | 10/100/1G Copper | no
NC380T PCI Express Dual Port Multifunction Gigabit Server | 2 | BCM5706 | - | bnx2 | PCI-E | 10/100/1G Copper | no
NC522SFP Dual Port 10GbE Server Adapter | 2 | NX3031 | 4040:0100 | ??? | PCI-E | 10G Fiber | no
NC550SFP Dual Port 10GbE Server Adapter | 2 | Emulex OneConnect | 19a2:0700 | be2net | PCI-E | 10G Fiber | no
NC552SFP 10GbE 2-port Ethernet Server | 2 | Emulex OneConnect | 19a2:0710 | be2net | PCI-E | 10G Fiber | no
NC7170 | 2 | Intel 82546EB | 8086:1010 | e1000 | PCI-X | 10/100/1G Copper | no

For all "?" I could not clarify the points exactly.  

IBM network card | Ports | Chipset | PCI ID | Driver | PCI | Speed | MQ
Broadcom 10Gb 4-Port Ethernet Expansion Card (CFFh) for IBM BladeCenter | 4 | BCM57710 | - | bnx2x | PCI-E | 10G Fiber | no
Broadcom NetXtreme Quad Port GbE Network Adapter | 4 | I350 | - | igb | PCI-E | 1G Copper | yes
NetXuleme 1000T | 1 | ??? (1) | - | ??? | PCI-X | 10/100/1G Copper | ???
NetXuleme 1000T Dual | 2 | ??? (1) | - | ??? | PCI-X | 10/100/1G Copper | ???
PRO/1000 PT Dual Port Server Adapter | 2 | 82571GB | - | e1000 | PCI-E | 10/100/1G Copper | no

(1) These network cards cannot even be found via Google.

Notes to Intel igb and ixgbe driver

I used the LKDDb database to identify the drivers. LKDDb is an attempt to build a comprehensive database of hardware and protocols known by Linux kernels. The driver database includes the numeric identifiers of the hardware, the kernel configuration menu needed to build the driver, and the driver filename. The database is built automatically from the kernel sources, so it is very easy to keep it up to date. This was the basis of the cross-reference between the Check Point HCL and the Intel drivers.

Link to LKDDb web database:
https://cateee.net/lkddb/web-lkddb/

Link to LKDDb database driver:

igb, ixgbe, i40e, mlx5_core    

There you can find output like the following for each driver, e.g. for igb:

Numeric ID (from LKDDb) and names (from pci.ids) of recognized devices:

  • vendor: 8086 ("Intel Corporation"), device: 0438 ("DH8900CC Series Gigabit Network Connection")
  • vendor: 8086 ("Intel Corporation"), device: 10a9 ("82575EB Gigabit Backplane Connection")
  • vendor: 8086 ("Intel Corporation"), device: 10c9 ("82576 Gigabit Network Connection")
  • vendor: 8086 ("Intel Corporation"), device: 10d6 ("82575GB Gigabit Network Connection")

and many more...

How to recognize the driver

With ethtool you can display the type and version of the driver, for example for the interface eth0:

# ethtool -i eth0

driver: igb
version: 2.1.0-k2
firmware-version: 3.2-9
bus-info: 0000:02:00.0
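
In addition to the driver name and version, you can check whether an interface actually has several active queues by looking at its IRQ lines, since each queue gets its own IRQ (for example eth0-TxRx-0, eth0-TxRx-1, and so on):

# cat /proc/interrupts | grep eth0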

Active RX multi queues - formula

By default, Security Gateway calculates the number of active RX queues based on this formula:

RX queues = [Total Number of CPU cores] - [Number of CoreXL FW instances]
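
For example, on a gateway with 16 CPU cores and 10 CoreXL FW instances, the default is 16 - 10 = 6 active RX queues per Multi-Queue interface, capped by the maximal number of RX queues that the driver supports (see the tables above).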

Configure

Here I would refer to the following links:

Performance Tuning R80.10 Administration Guide
Performance Tuning R80.20 Administration Guide
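
For illustration, on R80.10 to R80.30 the number of RX queues can also be set manually per driver with cpmq (the value 8 is only an example; verify the exact syntax in the admin guide for your release):

# cpmq set rx_num ixgbe 8
# cpmq get -a

On these releases a reboot is required for the change to take effect.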

References

Best Practices - Security Gateway Performance 
Multi-Queue does not work on 3200 / 5000 / 15000 / 23000 appliances when it is enabled for on-board ...
Performance Tuning R80.10 Administration Guide
Performance Tuning R80.20 Administration Guide

Intel:
Download Intel® Network Adapter Virtual Function Driver for Intel® 10 Gigabit Ethernet Network Conne... 
Download Network Adapter Driver for Gigabit PCI Based Network Connections for Linux* 
Download Intel® Network Adapter Driver for 82575/6, 82580, I350, and I210/211-Based Gigabit Network ... 

LKDDb (Linux Kernel Driver Database):
https://cateee.net/lkddb/web-lkddb/

 
