Kaspars_Zibarts
Employee

Unable to set MQ on more than 10 cores R80.30 T215 2.6.18 kernel

I was trying to add two more MQ cores to my 23800 VSX cluster (10 > 12), but after increasing the ixgbe core count and rebooting, one of the 5 interfaces that had MQ configured before remained in a "Pending On" state:

cpmq set rx_num ixgbe 12

[Expert@vsx:0]# cpmq get -a

Active ixgbe interfaces:
eth2-01 [On]
eth2-02 [Off]
eth2-03 [On]
eth2-04 [Pending On]
eth3-01 [On]
eth3-02 [On]

Non-Active ixgbe interfaces:
eth4-01 [Off]
eth4-02 [Off]
eth4-03 [Off]
eth4-04 [Off]
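
For reference, one way to double-check what actually got applied is to count the interrupt vectors per interface in /proc/interrupts (a rough check only; the exact interrupt naming varies by driver, and MQ creates roughly one vector per queue):

[Expert@vsx:0]# grep -c eth2-01 /proc/interrupts
[Expert@vsx:0]# grep -c eth2-04 /proc/interrupts

An interface stuck in "Pending On" should still show its old queue count until the change actually takes effect.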

even though the Performance Tuning Administration Guide clearly says that ixgbe should support 16 cores:

[Screenshot: Performance Tuning Administration Guide table showing a maximum of 16 queues for ixgbe]

 

But then from the command line it shows 10!

 

[Expert@vsx:0]# cpmq get rx_num ixgbe
The rx_num for ixgbe is: 10

Ouch! Does anyone know if I can get more than 10 on the 2.6.18 kernel in R80.30? Is it a bug or wrong documentation?
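
For completeness, the documented maximum can vary with the driver build, so it's worth capturing the exact ixgbe version in play (standard Linux commands; the interface name is just an example):

[Expert@vsx:0]# ethtool -i eth2-01
[Expert@vsx:0]# modinfo ixgbe | grep -i version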

Just noticed that there is an override option. Has anyone tried it?

[Screenshot: cpmq set command help showing the override (-f) option]

 

4 Replies
Kaspars_Zibarts
Employee

OK, -f did not help, so I suspect that since this is a non-HT system with a total of 24 cores, there must be some calculation in place that limits the max cores for ixgbe MQ to 10.
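
If anyone wants to sanity-check the same theory on their own box, the SND/CoreXL split is visible with the commands below (the idea that cpmq derives its cap from the SND core count is only my suspicion, not confirmed):

[Expert@vsx:0]# fw ctl affinity -l -r
[Expert@vsx:0]# fw ctl multik stat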

Timothy_Hall
Legend

The driver-based max limit for ixgbe is 16 queues, but Multi-Queue is partially implemented by the underlying NIC hardware, which can have lower limits. As an example, the igb driver theoretically supports up to 8 queues, but if the underlying NIC hardware is an I211 (onboard interfaces for the 3200, 5000, 6500, 15000, and 23000), it can only have a maximum of two parallel queues per interface. I suspect this is why you can't go beyond 10 queues in your case and attempting to go higher fails, which leads to the persistent "Pending On" state.
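
One way to see what silicon actually sits behind the driver on a given box (standard tools; the PCI address below is just a placeholder, take the real one from the bus-info line of ethtool -i):

[Expert@vsx:0]# ethtool -i eth2-01
[Expert@vsx:0]# lspci -s 0000:04:00.0

The lspci line shows the exact NIC model, which is what determines the hardware queue limit.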

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
PhoneBoy
Admin

I believe the older Linux kernel supports a maximum of 5 interfaces with Multi-Queue.
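
A quick way to count how many interfaces currently have it enabled, using the same cpmq output shown above (note that "[Pending On]" lines are not counted by this pattern):

[Expert@vsx:0]# cpmq get -a | grep -c '\[On\]'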

Kaspars_Zibarts
Employee

Yep, that's correct, but I wanted to add a bit more horsepower to the SNDs on those five, as we were suffering from poor voice quality in Teams due to the very bursty nature of the traffic. CPU (60%) and interface utilisation (25%) all looked good, but we still saw 0.18% RX-DRP on some of the interfaces. Increasing the RX ring buffers to the maximum helped a little initially. But then, as Tim had been suggesting back in March, adding more SND cores may help even though they do not report 100% utilisation. So we went from 8 > 10 MQ cores first, and that showed a massive improvement in RX-DRP. Since I was able to free up more cores in VSX, I wanted to go all the way to 12 but ran into this limitation. So the next step is 3.10, which was not available back in March for 23800 appliances 🙂

My old reference: https://community.checkpoint.com/t5/VSX/RX-DRP-rx-missed-errors-on-VSX-23800-R80-30-with-mltiqueue/t...
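
For anyone chasing similar symptoms, the drop counters and ring sizes mentioned above are all visible with standard tools (interface name is just an example; on Gaia the persistent way to change ring size is clish rather than ethtool -G):

[Expert@vsx:0]# netstat -ni
[Expert@vsx:0]# ethtool -S eth2-01 | grep -i drop
[Expert@vsx:0]# ethtool -g eth2-01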

 

