HeikoAnkenbrand
Champion

R80.x Performance Tuning Tip – Multi Queue

What is Multi Queue?

 

It is an acceleration feature that lets you assign more than one packet queue and CPU to an interface.

When most of the traffic is accelerated by SecureXL, the CPU load from the CoreXL SND instances can be very high, while the CPU load from the CoreXL FW instances can be very low. This is an inefficient utilization of CPU capacity.

By default, the number of CPU cores allocated to CoreXL SND instances is limited by the number of network interfaces that handle the traffic. Because each interface has one traffic queue, only one CPU core can handle each traffic queue at a time. This means that each CoreXL SND instance can use only one CPU core at a time for each network interface.

Check Point Multi-Queue lets you configure more than one traffic queue for each network interface. For each interface, you can use more than one CPU core (that runs CoreXL SND) for traffic acceleration. This balances the load efficiently between the CPU cores that run the CoreXL SND instances and the CPU cores that run the CoreXL FW instances. Since R81, Multi-Queue is enabled by default on all supported interfaces.

Important:
- Multi-Queue applies only if SecureXL is enabled.
- Since R81, Multi-Queue is enabled by default on all supported interfaces.
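A quick way to see whether an interface already has more than one queue, and which CPU cores currently act as SND, is to look at the interrupt table and the CoreXL affinity from Expert mode. This is only an illustrative sketch with generic commands; the interface name eth1 is just an example:

# Each RX/TX queue of a Multi-Queue interface appears as its own IRQ line
# (for example eth1-TxRx-0, eth1-TxRx-1, ...); a single-queue interface has only one
grep eth1 /proc/interrupts

# Show which CPU cores handle the interfaces (CoreXL SND) and which run the CoreXL FW instances
fw ctl affinity -l -r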

Chapter

More interesting articles:

- R80.x Architecture and Performance Tuning - Link Collection

Multi-Queue Requirements and Limitations

Tip 1

R80.10 to R80.40:

  • Multi-Queue is not supported on computers with one CPU core.
  • Network interfaces must use a driver that supports Multi-Queue. Only network cards that use the igb (1Gb), ixgbe (10Gb), i40e (40Gb), or mlx5_core (40Gb) drivers support Multi-Queue.
  • You can configure a maximum of five interfaces with Multi-Queue.
  • You must reboot the Security Gateway after all changes in the Multi-Queue configuration.
  • For best performance, it is not recommended to assign both SND and a CoreXL FW instance to the same CPU core.
  • Do not change the IRQ affinity of queues manually. Changing the IRQ affinity of the queues manually can adversely affect performance.
  • Multi-Queue is relevant only if SecureXL and CoreXL are enabled.
  • You cannot use the “sim affinity” or the “fw ctl affinity” commands to change and query the IRQ affinity of the Multi-Queue interfaces.
  • Since R80.30 with kernel 3.10, and with R80.40, Multi-Queue is also supported with the vmxnet3 driver.
  • The number of queues is limited by the number of CPU cores and the type of interface driver:

Network card driver | Speed | Maximal number of RX queues
igb                 | 1 Gb  | 4
ixgbe               | 10 Gb | 16
i40e                | 40 Gb | 14
mlx5_core           | 40 Gb | 10
vmxnet3 *           | 10 Gb | 8

* Since R80.30 with kernel 3.10 and with R80.40+

  • The maximum RX queues limit dictates the largest number of SND/IRQ instances that can empty packet buffers for an individual interface that uses that driver with Multi-Queue enabled.
  • Multi-Queue does not work on 3200 / 5000 / 15000 / 23000 appliances in the following scenario (sk114625):
    The number of traffic queues is limited by the number of CPU cores and the type of network interface card driver.
    The on-board interfaces on these appliances use the igb driver, which supports up to 4 RX queues.
    However, the I211 controller on these on-board interfaces supports only up to 2 RX queues.
    (A quick way to check the current Multi-Queue status is shown in the sketch after this list.)
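On R80.10 up to R80.40, a quick way to check which interfaces have Multi-Queue enabled and how the driver limits apply is the cpmq tool together with standard ethtool. A minimal sketch (the output format differs between versions, so treat it as an illustration):

# Show the Multi-Queue status of the supported interfaces
cpmq get

# Show the driver an interface uses (must be igb / ixgbe / i40e / mlx5_core, or vmxnet3 on R80.40)
ethtool -i eth1

# Show how many RX queues the NIC and driver expose and how many are currently configured
ethtool -l eth1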

R81 Multi Queue news:

- Multi Queue is now fully automated:
      - Multi Queue is enabled by default on all supported interfaces.
      - The number of queues on each interface is determined automatically, based on the number of available CPUs (SNDs) and the NIC/driver limitations.
      - Queues are automatically affined to the SND cores.
- Multi Queue configuration does not require a reboot in order to be applied.
- Multi Queue is now managed via the Gaia Clish command line.
- Multi Queue is now managed by the out-of-the-box experience performance tool (see the sketch below).
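For R80.40 and R81, where queue assignment is automatic, the mq_mng tool can be used from Expert mode to review the current state. The exact options are an assumption based on the R80.40+ tooling, so verify them against the Performance Tuning Administration Guide of your version:

# Show the current Multi-Queue configuration per interface (R80.40 and higher)
mq_mng --show

# More verbose view, including the queue-to-CPU (SND) mapping
mq_mng --show -vv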

New supported Multi Queue drivers:

Driver     | Gaia version | Speed [Gbps] | Description                                    | Maximal number of RX queues
igb        | R80.10+      | 1            | Intel® PCIe 1 Gbps                             | 2-16 (depends on the interface)
ixgbe      | R80.10+      | 10           | Intel® PCIe 10 Gbps                            | 16
i40e       | R80.10+      | 40           | Intel® PCIe 40 Gbps                            | 64
i40evf     | R81          | 40           | Intel® i40e 40 Gbps                            | 4
mlx5_core  | R80.10+      | 40           | Mellanox® ConnectX® mlx5 core driver           | 60
ena        | R81          | 20           | Elastic Network Adapter in Amazon® EC2         | configured automatically
virtio_net | R81          | 10           | VirtIO paravirtualized device driver from KVM® | configured automatically
vmxnet3    | R80.40+      | 10           | VMXNET Generation 3 driver from VMware®        | configured automatically

 

 

When Multi-Queue will not help

Tip 2

  • When most of the processing is done in CoreXL - either in the Medium path, or in the Firewall path (Slow path).
  • All current CoreXL FW instances are highly loaded, so there are no CPU cores that can be reassigned to SecureXL.
  • When IPS, or other deep inspection Software Blades are heavily used.
  • When all network interface cards are processing the same amount of traffic.
  • When all CPU cores that are currently used by SecureXL are congested.
  • When trying to increase traffic session rate.
  • When there is not enough diversity of traffic flows. In the extreme case of a single flow, for example, traffic will be handled only by a single CPU core. (Clarification: The more traffic is passing to/from different ports/IP addresses, the more you benefit from Multi-Queue. If there is a single traffic flow from a single Client to a single Server, then Multi-Queue will not help.)
Multi-Queue is recommended when:
  • The load on CPU cores that run as SND is high (idle < 20%).
  • The load on CPU cores that run CoreXL FW instances is low (idle > 50%).
  • There are no CPU cores left to be assigned to the SND by changing interface affinity.
  (A quick way to check this core distribution is shown in the sketch below.)
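To check whether a gateway matches these criteria, compare the idle values of the CPU cores running SND with those running CoreXL FW instances. A minimal sketch using tools available on Gaia (purely illustrative):

# Show the CoreXL FW instances and their CPU assignment
fw ctl multik stat

# Watch per-core idle values live (press "1" in top for the per-CPU view),
# or use cpview for a Check Point specific breakdown
top
cpview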
Multi-Queue support on Appliance vs. Open Server

Gateway type: Check Point Appliance
  • MQ is supported on all appliances that use the following drivers: igb, ixgbe, i40e, mlx5_core.
  • These expansion line cards for 4000, 12000, and 21000 appliances support Multi-Queue:
    • CPAC-ACC-4-1C
    • CPAC-ACC-4-1F
    • CPAC-ACC-8-1C
    • CPAC-ACC-2-10F
    • CPAC-ACC-4-10F
  • This expansion line card for 5000, 13000, and 23000 appliances supports Multi-Queue:
    • CPAC-2-40F-B

Gateway type: Open Server
  • Network cards that use the igb (1Gb), ixgbe (10Gb), i40e (40Gb), or mlx5_core (40Gb) drivers support Multi-Queue.

 

 

Multi-Queue support on Open Server (Intel Network Cards)

Tip 3

 

The following list shows an overview of all Intel cards from the Check Point HCL for open servers, as of 11/21/2018.

 

The list is cross-referenced to the Intel drivers. I do not assume any liability for the correctness of the information. These lists should only be used to help you find the right drivers. It is not an official document of Check Point!

 

So please always read the official documents of Check Point.

Intel network card | Ports | Chipset | PCI ID | Driver | PCI | Speed | Media | MQ
10 Gigabit AT | 1 | 82598EB | 8086:25e7 | ixgbe | PCI-E | 10G | Copper | yes
10 Gigabit CX4 | 2 | 82598EB | 8086:10ec | ixgbe | PCI-E | 10G | Copper | yes
10 Gigabit XF family (Dual and Single Port models, SR and LR) | 2 | 82598 | 8086:10c6 | ixgbe | PCI-E | 10G | Fiber | yes
Ethernet Converged Network Adapter X540-T2 | 2 | X540 | 8086:1528 | ixgbe | PCI-E | 100/1G/10G | Copper | yes
Ethernet Server Adapter I340-T2 | 2 | 82580 | - | igb | PCI-E | 10/100/1G | Copper | yes
Ethernet Server Adapter I340-T4 | 2 | 82580 | - | igb | PCI-E | 10/100/1G | Copper | yes
Ethernet Server Adapter X520 (X520-SR2, X520-SR1, X520-LR1, X520-DA2) | 2 | X520 | - | ixgbe | PCI-E | 10G | Fiber | yes
Gigabit VT Quad Port Server Adapter | 4 | 82575GB | 8086:10d6 | igb | PCI-E | 10/100/1G | Copper | yes
Intel Gigabit ET2 Quad Port Server Adapter | 4 | - | - | igb | PCI-E | 1G | Copper | yes
PRO/10GbE CX4 | 1 | 82597EX | 8086:109e | ixgb | PCI-X | 10G | Copper | no
PRO/10GbE LR | 1 | 82597EX | 8086:1b48 | ixgb | PCI-X | 10G | Fiber | no
PRO/10GbE SR | 1 | 82597EX | 8086:1a48 | ixgb | PCI-X | 10G | Fiber | no
PRO/1000 Dual 82546GB | 2 | 82546GB | 8086:108a | e1000 | PCI-E | 10/100/1G | Copper | no
Pro/1000 EF Dual | 2 | 82576 | 8086:10e6 | igb ? | PCI-E | 1G | Fiber | yes ?
Pro/1000 ET Dual Port Server Adapter | 2 | 82576 | - | igb | PCI-E | 1G | Copper | yes
PRO/1000 ET Quad Port Server Adapter | 4 | 82576 | 8086:10e8 | igb | PCI-E | 10/100/1G | Copper | yes
PRO/1000 GT Quad | 4 | 82546 | 8086:10b5 | e1000 | PCI-X | 10/100/1G | Copper | no
PRO/1000 MF | 1 | 82546 ? / 82545 ? | - | e1000 | PCI-X | 1G | Fiber | no
PRO/1000 MF (LX) | 1 | 82546 ? / 82545 ? | - | e1000 | PCI-X | 1G | Fiber | no
PRO/1000 MF Dual | 2 | 82546 ? / 82545 ? | - | e1000 | PCI-X | 1G | Fiber | no
PRO/1000 MF Quad | 4 | 82546 ? / 82545 ? | - | e1000 | PCI-X | 1G | Fiber | no
PRO/1000 PF | 1 | 82571 ? | 8086:107e | e1000 | PCI-E | 1G | Fiber | no
PRO/1000 PF Dual | 2 | 82571 ? | 8086:115f | e1000 | PCI-E | 1G | Fiber | no
PRO/1000 PF Quad Port Server Adapter | 4 | 82571 ? | 8086:10a5 | e1000 | PCI-E | 1G | Fiber | no
PRO/1000 PT | 1 | 82571 | 8086:1082 | e1000 | PCI-E | 10/100/1G | Copper | no
PRO/1000 PT Dual | 2 | 82571 | 8086:105e | e1000 | PCI-E | 10/100/1G | Copper | no
PRO/1000 PT Dual UTP | 2 | 82571 | 8086:108a | e1000 | PCI-E | 10/100/1G | Copper | no
PRO/1000 PT Quad | 4 | 82571 | 8086:10a4 | e1000 | PCI-E | 10/100/1G | Copper | no
PRO/1000 PT Quad Low Profile | 4 | 82571 | 8086:10bc | e1000 | PCI-E | 10/100/1G | Copper | no
PRO/1000 XF | 1 | 82544 | - | e1000 | PCI-X | 1G | Fiber | no

 For all "?" I could not clarify the points exactly. 

Multi-Queue support on Open Server (HP and IBM Network Cards)

Tip 4

The following list shows an overview of all HP and IBM cards from the Check Point HCL for open servers, as of 11/22/2018.

 

The list is cross-referenced to the Intel drivers. I do not assume any liability for the correctness of the information. These lists should only be used to help you find the right drivers. It is not an official document of Check Point!

 

So please always read the official documents of Check Point.

HP network card | Ports | Chipset | PCI ID | Driver | PCI | Speed | Media | MQ
Ethernet 1Gb 4-port 331T | 4 | BCM5719 | 14e4:1657 | tg3 | PCI-E | 1G | Copper | no
Ethernet 1Gb 4-port 366FLR | 4 | Intel I350 | 8086:1521 | igb | PCI-E | 1G | Copper | yes
Ethernet 1Gb 4-port 366T | 4 | Intel I350 | 8086:1521 | igb | PCI-E | 1G | Copper | yes
Ethernet 10Gb 2-port 560SFP+ | 2 | Intel 82599EB | 8086:10fb | ixgbe | PCI-E | 10G | Fiber | yes
Ethernet 10Gb 2-port 561FLR-T | 2 | Intel X540-AT2 | 8086:1528 | ixgbe | PCI-E | 10G | Copper | yes
HPE Ethernet 10Gb 2-port 562FLR-SFP+ | 2 | Intel X710 | 8086:1572 | i40e | PCI-E | 10G | Copper | yes
Ethernet 10Gb 2-port 561T | 2 | Intel X540-AT2 | 8086:1528 | ixgbe | PCI-E | 10G | Copper | yes
NC110T | 1 | Intel 82572GI | 8086:10b9 | e1000 | PCI-E | 10/100/1G | Copper | no
NC320T | 1 | BCM5721 KFB | 14e4:1659 | tg3 | PCI-E | 10/100/1G | Copper | no
NC325m Quad Port | 4 | BCM5715S | 14e4:1679 | tg3 | PCI-E | 1G | Copper | no
NC326m PCI Express Dual Port 1Gb Server Adapter for c-Class Blade System | 2 | BCM5715S | - | tg3 | PCI-E | 1G | Copper | no
NC340T | 4 | Intel 82546GB | 8086:10b5 | e1000 | PCI-X | 10/100/1G | Copper | no
NC360T | 2 | Intel 82571EB | 8086:105e | e1000 | PCI-E | 10/100/1G | Copper | no
NC364T | 4 | Intel 82571EB | 8086:10bc | e1000 | PCI-E | 10/100/1G | Copper | no
NC365T PCI Express Quad Port | 4 | Intel 82580 | 8086:150e | igb | PCI-E | 10/100/1G | Copper | yes
NC373F | 1 | Broadcom 5708 | 14e4:16ac | bnx2 | PCI-E | 1G | Copper | no
NC373m Dual Port | 2 | BCM5708S | 14e4:16ac | bnx2 | PCI-E | 10/100/1G | Copper | no
NC373T | 1 | Broadcom 5708 | 14e4:16ac | bnx2 | PCI-E | 10/100/1G | Copper | no
NC380T PCI Express Dual Port Multifunction Gigabit server | 2 | BCM5706 | - | bnx2 | PCI-E | 10/100/1G | Copper | no
NC522SFP Dual Port 10GbE Server Adapter | 2 | NX3031 | 4040:0100 | ??? | PCI-E | 10G | Fiber | no
NC550SFP Dual Port 10GbE Server Adapter | 2 | Emulex OneConn | 19a2:0700 | be2net | PCI-E | 10G | Fiber | no
NC552SFP 10GbE 2-port Ethernet Server | 2 | Emulex OneConn | 19a2:0710 | be2net | PCI-E | 10G | Fiber | no
NC7170 | 2 | Intel 82546EB | 8086:1010 | e1000 | PCI-X | 10/100/1G | Copper | no

For all "?" I could not clarify the points exactly.  

IBM network card | Ports | Chipset | PCI ID | Driver | PCI | Speed | Media | MQ
Broadcom 10Gb 4-Port Ethernet Expansion Card (CFFh) for IBM BladeCenter | 4 | BCM57710 | - | bnx2x | PCI-E | 10G | Fiber | no
Broadcom NetXtreme Quad Port GbE network Adapter | 4 | I350 | - | igb | PCI-E | 1G | Copper | yes
NetXuleme 1000T | 1 | ??? (1) | - | ??? | PCI-X | 10/100/1G | Copper | ???
NetXuleme 1000T Dual | 2 | ??? (1) | - | ??? | PCI-X | 10/100/1G | Copper | ???
PRO/1000 PT Dual Port Server Adapter | 2 | 82571GB | - | e1000 | PCI-E | 10/100/1G | Copper | no

(1) These network cards can't even be found on Google.

Notes to Intel igb and ixgbe driver

I used the LKDDb database to identify the drivers. LKDDb is an attempt to build a comprehensive database of hardware and protocols known to Linux kernels. The driver database includes numeric identifiers of hardware, the kernel configuration menu needed to build the driver, and the driver filename. The database is built automatically from the kernel sources, so it is easy to keep it up to date. This was the basis of the cross-reference between the Check Point HCL and the Intel drivers.

Link to LKDDb web database:
https://cateee.net/lkddb/web-lkddb/

Link to LKDDb database driver:

igb, ixgbe, i40e, mlx5_core    

There you can find the following output for each driver, e.g. igb:

Numeric ID (from LKDDb) and names (from pci.ids) of recognized devices:

  • vendor: 8086 ("Intel Corporation"), device: 0438 ("DH8900CC Series Gigabit Network Connection")
  • vendor: 8086 ("Intel Corporation"), device: 10a9 ("82575EB Gigabit Backplane Connection")
  • vendor: 8086 ("Intel Corporation"), device: 10c9 ("82576 Gigabit Network Connection")
  • vendor: 8086 ("Intel Corporation"), device: 10d6 ("82575GB Gigabit Network Connection")

and many more...
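To match your own NIC against these LKDDb entries, read its PCI vendor:device ID directly from the system. This is a generic Linux sketch using standard tools; the interface name eth0 is just an example:

# List Ethernet controllers with their [vendor:device] PCI IDs,
# e.g. [8086:10c9] = Intel 82576, which is handled by the igb driver
lspci -nn | grep -i ethernet

# Find the PCI bus address that belongs to a specific interface
ethtool -i eth0 | grep bus-info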

How to recognize the driver

With ethtool you can display the type and version of the driver, for example for the interface eth0:

# ethtool -i eth0

driver: igb
version: 2.1.0-k2
firmware-version: 3.2-9
bus-info: 0000:02:00.0
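Besides the driver name, the number of RX queues that the NIC and driver can expose is relevant for Multi-Queue sizing. Standard ethtool shows this as well (a sketch; depending on the driver, the queues are reported under RX/TX or under Combined):

# Show the maximum and currently configured number of queues (channels) for eth0
ethtool -l eth0

On a Security Gateway, do not change the queue count with "ethtool -L" manually; use the Check Point tools (cpmq / mq_mng) instead, so that SecureXL keeps control of the queue-to-CPU affinity.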

Active RX multi queues - formula

By default, the Security Gateway calculates the number of active RX queues based on this formula:

RX queues = [Total Number of CPU cores] - [Number of CoreXL FW instances]
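As a worked example with hypothetical numbers: on a gateway with 16 CPU cores and 12 CoreXL FW instances, 16 - 12 = 4 RX queues are active per Multi-Queue interface, i.e. one queue per SND core. The two inputs of the formula can be read from the gateway itself:

# Total number of CPU cores
grep -c ^processor /proc/cpuinfo

# Number of CoreXL FW instances (count the instances listed)
fw ctl multik stat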

Configure

Here I would refer to the following links:

Performance Tuning R80.10 Administration Guide
Performance Tuning R80.20 Administration Guide

References

Best Practices - Security Gateway Performance 
Multi-Queue does not work on 3200 / 5000 / 15000 / 23000 appliances when it is enabled for on-board ...
Performance Tuning R80.10 Administration Guide
Performance Tuning R80.20 Administration Guide

Intel:
Download Intel® Network Adapter Virtual Function Driver for Intel® 10 Gigabit Ethernet Network Conne... 
Download Network Adapter Driver for Gigabit PCI Based Network Connections for Linux* 
Download Intel® Network Adapter Driver for 82575/6, 82580, I350, and I210/211-Based Gigabit Network ... 

LKDDb (Linux Kernel Driver Database):
https://cateee.net/lkddb/web-lkddb/

 

➜ CCSM Elite, CCME, CCTE ➜ www.checkpoint.tips
21 Replies
PhoneBoy
Admin

That explains why you wanted the information about NICs 🙂

HeikoAnkenbrand
Champion

Hi Dameon,

This was not planned. 🙂

As I said, I once spent 3 hours cross-referencing all network cards between the Check Point HCL and Intel.

After that I wrote this article. I will complete and revise the article in the next few days.

Here is the link to the original discussion:

Open Server - HCL for multi queue network cards 

Regards

Heiko

➜ CCSM Elite, CCME, CCTE ➜ www.checkpoint.tips
_Val_
Admin

well done Heiko Ankenbrand

HeikoAnkenbrand
Champion

I noticed the following: the drivers and network cards below are not yet supported by the Check Point Open Server HCL.

Is there a reason that 40Gbit/s network cards are not yet present in the HCL?

I think from a performance point of view, an open server with MQ should also be able to handle that.

What are the reasons why they are not supported?

NICs currently supported by the i40e driver, according to Intel:

 

    Intel® Ethernet Controller X710-AM2
    Intel® Ethernet Controller X710-BM2
    Intel® Ethernet Controller XL710-AM1
    Intel® Ethernet Controller XL710-AM2
    Intel® Ethernet Controller XL710-BM1
    Intel® Ethernet Controller XL710-BM2
    Intel® Ethernet Converged Network Adapter X710-DA2
    Intel® Ethernet Converged Network Adapter X710-DA4
    Intel® Ethernet Converged Network Adapter XL710-QDA1
    Intel® Ethernet Converged Network Adapter XL710-QDA2

Regards

Heiko

➜ CCSM Elite, CCME, CCTE ➜ www.checkpoint.tips
Rolf_Kaschek
Contributor

nice list

christian_konne
Participant

Yes! The Intel Open Server network card list is very nice.

Thanks

Christian

HeikoAnkenbrand
Champion

And again, 3 hours of driver searching and referencing for the HP network cards. 🙂

Regards

Heiko

➜ CCSM Elite, CCME, CCTE ➜ www.checkpoint.tips
Harry_Morgan
Contributor

Hi Heiko Ankenbrand,

First of all, thank you very much for the information.

But I have one more question! How did you cross-reference the network cards?

What sources did you use?

Regards

Harry

_Val_
Admin

Important clarification

We appreciate Heiko Ankenbrand's article and research on Multi-Queue. However, some points need to be mentioned:

  1. The list above is not an official reference from Check Point Software Technologies.
  2. When choosing an Open Server and NICs for it, always consult the HCL (Compatible Hardware Archive | Check Point Software) and make sure the combination of Open Server platform and network cards you are going to use is listed as supported for your specific software version.
  3. As we are constantly expanding Multi-Queue support, consult the software release notes and relevant SK articles before enabling the feature.
  4. In case of any doubt, we encourage you to consult Check Point Software Technologies through official channels: TAC and/or your local Check Point office.

Once more, thanks to the author for his contribution to this community. Keep up the good work.

HeikoAnkenbrand
Champion

Here I agree with Valeri Loukine. You should always look at the HCL and the relevant SKs, readmes and documentation. I was just trying to reference the HCL against the network card drivers.

Thank you Valeri, I have included the following in the article.

CUT>>>

The list is cross-referenced to the Intel drivers. I do not assume any liability for the correctness of the information. These lists should only be used to help you find the right drivers. It is not an official document of Check Point!

 

So please always read the official documents of Check Point.

<<<CUT

Regards

Heiko

➜ CCSM Elite, CCME, CCTE ➜ www.checkpoint.tips
HeikoAnkenbrand
Champion

I used the LKDDb database to identify the drivers. LKDDb is an attempt to build a comprehensive database of hardware and protocols known to Linux kernels. The driver database includes numeric identifiers of hardware, the kernel configuration menu needed to build the driver, and the driver filename. The database is built automatically from the kernel sources, so it is easy to keep it up to date. This was the basis of the cross-reference between the Check Point HCL and the Intel drivers.

 

Link to LKDDb web database:
https://cateee.net/lkddb/web-lkddb/

 

Link to LKDDb database driver:

igb, ixgbe, i40e, mlx5_core    

➜ CCSM Elite, CCME, CCTE ➜ www.checkpoint.tips
Ukko_Metsola
Participant

A very interesting web link, the LKDDb web database:
https://cateee.net/lkddb/web-lkddb/

Nabeel_Saeed
Explorer

Thanks for this article.

testnetx
Explorer

tg3 seems to support MQ on Linux kernels >= 3:

02:00.1 Ethernet controller: Broadcom Corporation NetXtreme BCM5720 Gigabit Ethernet PCIe

root@debian:~# ethtool -i eth0
driver: tg3
version: 3.137
firmware-version: FFV7.6.14 bc 5720-v1.31
bus-info: 0000:02:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: no

root@debian:~# ethtool -l eth0
Channel parameters for eth0:
Pre-set maximums:
RX: 4
TX: 4
Other: 0
Combined: 0
Current hardware settings:
RX: 4
TX: 4
Other: 0
Combined: 0

 

Timothy_Hall
Legend

That would seem to be the case based on your ethtool output, but based on my very negative experiences in the past with Broadcom NICs I wouldn't trust them to carry any production traffic whatsoever, let alone a heavy volume of traffic that would require use of Multi-Queue.  No thanks.

 

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
testnetx
Explorer

Please note that I don't suggest that anyone use Broadcom adapters; my post was only for informative purposes, because this is the only good article on the Internet where people can get information on Multi-Queue supported devices and drivers. It would be good if people shared some information about different NICs not described here.

Today I tested the mlx5_core driver with a Mellanox MCX4121A-XCAT ConnectX-4 Lx EN (dual 10G), and it turns out it supports more than the 10 RX queues stated as 'Maximal number of RX queues' in this article:

# ethtool -i enp3s0f0
driver: mlx5_core
version: 3.0-1 (January 2015)
firmware-version: 14.25.1020
# ethtool -l enp3s0f0
Channel parameters for enp3s0f0:
Pre-set maximums:
RX: 0
TX: 0
Other: 0
Combined: 24
Current hardware settings:
RX: 0
TX: 0
Other: 0
Combined: 24

 

 

Magnus-Holmberg
Advisor

Hi,


We have been running the 25G NIC for the last few months on the R80.30 3.10 customer release and now on the normal GA version.
If you want to add to the list 🙂

Intel(R) Ethernet Controller XXV710 for 25GbE SFP28
HPE Eth 10/25Gb 2p 661SFP28 Adptr
870825-B21

ethtool -i eth4
driver: i40e
version: 2.7.12
firmware-version: 6.00 0x800036cb 1.1747.0
expansion-rom-version:
bus-info: 0000:12:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes


 

ethtool eth4
Settings for eth4:
Supported ports: [ FIBRE ]
Supported link modes: 25000baseCR/Full
Supported pause frame use: Symmetric
Supports auto-negotiation: Yes
Supported FEC modes: Not reported
Advertised link modes: 25000baseCR/Full
Advertised pause frame use: No
Advertised auto-negotiation: Yes
Advertised FEC modes: Not reported
Speed: 25000Mb/s
Duplex: Full
Port: Direct Attach Copper
PHYAD: 0
Transceiver: internal
Auto-negotiation: off
Supports Wake-on: d
Wake-on: d
Current message level: 0x0000000f (15)
drv probe link timer
Link detected: yes

ethtool -l eth4
Channel parameters for eth4:
Pre-set maximums:
RX: 0
TX: 0
Other: 1
Combined: 8

https://www.youtube.com/c/MagnusHolmberg-NetSec
Eduard_Mammitzs
Participant

Quick question,

are all appliance ports compatible with multi queueing?

Timothy_Hall
Legend

Generally yes as long as it is an adapter using a supported driver (determined with ethtool -i (interface)).  Supported drivers are:

Intel: igb (4 queues maximum), ixgbe (16 queues maximum) & i40e (48 queues maximum)

Mellanox: mlx5_core (Gaia 2.6.18 - 10 queues maximum, Gaia 3.10 - 48 queues maximum), mlx_compat (Gaia 2.6.18 - 10 queues maximum, Gaia 3.10 - 48 queues maximum)

Note that the driver (and possibly kernel version) will usually limit the maximum number of queues per interface, but there are some exceptions such as the onboard I211 interfaces that use the igb driver, yet the NIC hardware only supports 2 queues.  See sk114625 for more details.

The limit of no more than 5 interfaces that can have Multi-Queue enabled simultaneously is a Gaia 2.6.18 kernel limitation, and has been lifted in Gaia 3.10.

There have also been some observations that newer Broadcom adapters (which are used on some open hardware firewalls) appear to support Multi-Queue, but I wouldn't trust Broadcom NIC hardware to perform even the most basic function (such as being violently smashed with a hammer) with any degree of stability or competence, let alone implement a feature like Multi-Queue.

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
Joshua
Contributor

Hello Heiko,

 

I'm currently working on making a complete list of all network cards that support Multi-Queue. 

The only information I found was about expansion line cards. Does this mean there are no onboard cards that support Multi-Queue?

 

Thanks

Joshua.

HeikoAnkenbrand
Champion

Update:

Since R80.30 with kernel 3.10, and with R80.40, MQ is supported with the vmxnet3 driver.

➜ CCSM Elite, CCME, CCTE ➜ www.checkpoint.tips
