Participant

Multiqueue configured with 4 queues, but only 2 queues associated with CPU cores

System with 8 Core CPU.

Forced 4 cores for CoreXL (via cpconfig).

This leaves 4 cores for multiqueue (default value).

Only 1 interface has multiqueue enabled.

However, we only see 2 queues associated with the interface (and 2 cores).

The NIC is an Intel 82580 and uses the igb driver.

Open Server with R80.20, kernel 3.10.

driver: igb
version: 5.3.5.18
firmware-version: 3.29, 0x8000027a

Any hint as to why only 2 queues are associated with the multi-queue interface?
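For anyone hitting the same thing, here is roughly how the allocation can be checked with standard Linux tools (the interface name eth0 and the eth0-TxRx IRQ vector naming are assumptions; substitute your multi-queue interface):

```shell
# Interface name is an assumption; substitute your multi-queue interface.
IF=eth0

# Driver/firmware details (matches the ethtool -i output posted above)
ethtool -i "$IF"

# Pre-set maximum vs. currently allocated queues ("Combined" row)
ethtool -l "$IF"

# Each allocated queue shows up as its own IRQ vector, e.g. eth0-TxRx-0
grep -- "$IF-TxRx" /proc/interrupts
```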

8 Replies
Champion

Not supporting more than two queues is a known limitation of some NIC hardware models; from page 395 of my book:

For the 3000/5000/15000/23000 series of firewall appliances, there is an additional
limitation for the on-board (built-in) NICs; they utilize the igb driver yet due to a NIC
hardware limitation, can only have a maximum of two parallel queues per interface. As
such, it is recommended to avoid using the onboard NICs on these appliances to handle
heavy traffic loads that might require Multi-Queue functionality. For more information
see sk114625: Multi-Queue does not work on 3200 / 5000 / 15000 / 23000 appliances when it is enabled for....
Gaia 3.10 Immersion Self-paced Video Series
now available at http://www.maxpowerfirewalls.com
Participant

It shouldn't apply, since we are talking about an HP Open Server.

And the same NIC has 4 queues on one interface in another HP server with R80.10.

 

Champion

According to Intel the 82580 supports up to 8 TX and RX queues.  Sounds to me like the number of queues is getting improperly capped for this NIC hardware in R80.20.  Keep in mind that if this system/NIC is not on the HCL you will be facing an uphill battle here, but it is time to engage with TAC so they can look at why the number of queues is getting capped at 2.  Be sure to mention sk114625 as it certainly seems to be related.  Also which kernel are you using, 2.6.18 or 3.10?

Participant

I'm using Kernel 3.10.

Champion

As I suspected, since your igb driver version (5.3.5.18) is much newer than the one in the 2.6.18 kernel (igb 4.1.2). This capping could certainly be an oversight in the new igb driver version for kernel 3.10; definitely time to involve TAC. For more info see this thread: https://community.checkpoint.com/t5/General-Topics/Open-Server-HCL-for-multi-queue-network-cards/m-p...

 

Participant

I've opened an SR.

Based on the thread provided, there is only one board certified for Gaia 3.10 in the HCL, which is an HP 10G card; no 1G NICs are listed.

Participant

Engaged with TAC.

Tested several configurations, but after a GW reboot only 2 queues are available, even when 4 are configured.
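On a plain Linux box, the requested channel count can be changed and re-checked with ethtool; a minimal sketch of what was tested (eth0 is an assumption, and on Gaia the supported path is the Check Point multi-queue configuration tooling rather than raw ethtool):

```shell
IF=eth0   # assumption: substitute the multi-queue interface

# Ask the igb driver for 4 combined RX/TX queue pairs
ethtool -L "$IF" combined 4

# Verify what was actually granted ("Combined" under "Current settings");
# it dropping back to 2 after a reboot matches the behaviour described here
ethtool -l "$IF"
```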

As 1G boards are not on the HCL for Gaia 3.10, no solution could be found with Multi-Queue.

The alternative is to go for a 10G fiber NIC (there is only one in the HCL for Gaia 3.10), or to create a bond of 4 interfaces and manually associate a CPU core with each physical interface of the bond.

Or wait for a 1G board to be certified by Check Point.
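The manual association in the bond workaround can be sketched in plain Linux terms by pinning each member interface's IRQ vectors to a dedicated core via /proc/irq (interface names, core numbers, and the ethX-TxRx vector naming are all assumptions; on Gaia you would normally do this with the Check Point affinity tooling, so treat this only as an illustration of the idea):

```shell
#!/bin/bash
# Sketch only: pin each bond member's IRQ vectors to one dedicated core.
# Interface names and core numbers below are assumptions for illustration.
pin_irqs_to_core() {
  local nic=$1 core=$2
  # smp_affinity takes a hex CPU bitmask; core N -> mask (1 << N)
  local mask
  mask=$(printf '%x' $((1 << core)))
  # Find every IRQ vector for this NIC in /proc/interrupts
  # (the "-TxRx" pattern matches igb MSI-X vectors; adjust for your driver)
  for irq in $(awk -v n="$nic-TxRx" '$0 ~ n {sub(":","",$1); print $1}' /proc/interrupts); do
    echo "$mask" > "/proc/irq/$irq/smp_affinity"
  done
}

# Hypothetical mapping: four 1G bond members onto cores 4-7
pin_irqs_to_core eth1 4
pin_irqs_to_core eth2 5
pin_irqs_to_core eth3 6
pin_irqs_to_core eth4 7
```

The mask written to smp_affinity is just the hex form of a one-bit CPU bitmask, so core 5 becomes 0x20 and core 7 becomes 0x80.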