Valeriu_Cioara1
Explorer

"out of the box performance tool"

Hello Check Point Gurus.

Has anyone heard about (or used) the "out of the box performance tool" that exists in the R80.30 code based on the 3.10 kernel, as well as in R80.40? There is a single reference to it in the whole SK database, sk153373, dealing with the automatic management of interfaces' multi-queue associations...

I hit a problem with a newly deployed 16200 cluster where one of the 10G interfaces is a production interface carrying a lot of traffic, but is also defined as the Management interface for the appliance... As such (being the management interface) it is left out of the multi-queue configuration and gets a single SND core assigned. That core reaches 70-80% usage with just 150 remote users tunneling into the cluster... The end-user plan was to have 2000 concurrent users... Without getting multi-queue enabled on that interface this will be impossible to achieve...
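
For reference, this is roughly how the bottleneck shows up from expert mode (standard Gaia/Linux commands; eth2-01 stands in here for the affected 10G interface):

fw ctl affinity -l -r          # list which SND/IRQ core services each interface
grep eth2-01 /proc/interrupts  # see how the interface's interrupts spread across cores
top                            # press 1 to expand per-CPU rows and watch the hot core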

Any hints / ideas will be much appreciated...

Thanks,

Valeriu

PhoneBoy
Admin

Which version are you actually using?

Valeriu_Cioara1
Explorer

R80.40 with JHF Take 65 installed on top...

G_W_Albrecht
Legend

Why not change the Management interface? In the WebUI, or as described in sk108333: "NMSETH0029 Management interface must have an IP address" error in Gaia Clish when trying ...
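
A minimal Gaia Clish sketch (eth1-06 is only a placeholder for the new management interface; it must already have an IP address assigned, otherwise you hit the NMSETH0029 error from that SK):

set management interface eth1-06
save config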

CCSE CCTE CCSM SMB Specialist
Timothy_Hall
Champion

My interpretation of the "out-of-the-box performance tool" is that it is the replacement for Automatic Interface Affinity that was employed in prior versions. 

Under the old scheme, Automatic Interface Affinity would check network interface utilization every 60 seconds and potentially shift SoftIRQ processing around on the different SND/IRQ cores in an attempt to keep them roughly balanced.  But only one SND/IRQ core could empty a single interface's ring buffer via SoftIRQ unless Multi-Queue was manually enabled by an administrator, and no more than 5 physical interfaces could have Multi-Queue enabled at a time.  There were also various driver-based queue limits that kept all SND/IRQ cores from being able to empty a single interface's ring buffer, even if Multi-Queue was enabled for that interface.
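
For context, the corresponding expert mode commands under that older scheme were roughly these (a sketch from memory of the pre-3.10 tools):

sim affinity -l   # show the current automatic/static interface affinity mapping
cpmq get          # query which interfaces have Multi-Queue enabled (old manual tool)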

When Gaia 3.10 is in use on a gateway (R80.30 with the 3.10 kernel, or R80.40), the out-of-the-box performance tool automatically enables Multi-Queue on all interfaces except the management interface, and the 5-interface limit is no longer present; the various driver-based queue limits were also substantially increased for some types of interfaces.  The expert mode mq_mng command is used to query and configure this new tool, although in R80.40 new clish commands have been added for managing Multi-Queue instead.  Here is an example of mq_mng in action:

[Screenshot: mq_mng output on a gateway with vmxnet3 interfaces]
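
For readers who cannot see the attachment, querying the tool looks roughly like this (the clish syntax is as I recall it from the R80.40 guides, so verify on your build):

mq_mng -o                            # expert mode: per-interface Multi-Queue state and cores
show interface eth2-01 multi-queue   # R80.40 clish equivalent for a single interface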

The reason that the Gaia management interface is excluded from Multi-Queue by the tool is to ensure that the box can still be remotely managed and interacted with even if this mechanism somehow fails.  In general you want to leave the tool alone and not try to make changes (especially to the number of queues in use), lest the tool stop ensuring that Multi-Queue is enabled on all interfaces but the management one.

So to @Valeriu_Cioara1's specific situation, I would recommend the following:

Change the management interface in Gaia to some other interface that you can reach for SSH/WebUI management.  Note however that Multi-Queue will not be automatically enabled on the prior management interface; see the supported steps needed to force this here: sk167200: Multi-queue state is "off" when changing the management interface

It does not appear possible to force the existing management interface to use Multi-Queue, at least as far as I can see.

mq_mng --set-mode manual --interface eth0 -c 0-7
error: Management interface 'eth0' cannot be configured

 

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
Valeriu_Cioara1
Explorer

Thanks guys, much appreciated... To give you an update: I changed the management interface on the appliance to one of the less-used copper interfaces... That allowed multi-queue to be enabled on the 10G production interface, which in turn appears to have resolved the bottleneck of a single core assigned to that interface...

There are still a number of questions about this new multi-queue operation and how it interacts with the SND / CoreXL dynamic split introduced in R80.40...

The output of the mq_mng -o command shows the production interfaces as "Dynamic" and the management interface as "Auto", with core 0 associated to it 8 times... See the output below...  I was expecting "Auto" and "Off" respectively...

Total 48 cores. Multiqueue 8 cores
i/f      type  state  mode           cores
-------------------------------------------------------------
eth1-02  igb   Up     Dynamic (8/8)  0,24,12,36,1,25,13,37
eth1-03  igb   Up     Dynamic (8/8)  0,24,12,36,1,25,13,37
eth1-04  igb   Up     Dynamic (8/8)  0,24,12,36,1,25,13,37
eth1-05  igb   Up     Dynamic (8/8)  0,24,12,36,1,25,13,37
eth1-06  igb   Up     Auto (8/8)*    0,0,0,0,0,0,0,0
eth2-01  i40e  Up     Dynamic (8/8)  24,12,36,1,25,13,37,0
eth2-02  i40e  Up     Dynamic (8/8)  24,12,36,1,25,13,37,0
* Management interface

Should I stop or disable the dynamic split, in order to have multi-queue behave as described in the R80.40 Admin Guide and SKs?

Timothy_Hall
Champion

Dynamic Split is not enabled by default in R80.40; have you enabled it?

Your output looks correct; the management interface traffic can only be handled on core 0, but there are 8 SND/IRQ cores, so the row of zeroes is expected.

Dynamic Split will have to work with the out of the box performance tool (which is what seems to be happening) so I don't see an issue here.
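
If you do want to check whether it is active, here is a rough sketch from memory (sk164155 covers the supported procedure; verify the syntax on your build):

dynamic_balancing -p            # print the current Dynamic Balancing (Dynamic Split) state
dynamic_balancing -o disable    # disable it, only if troubleshooting requires
dynamic_balancing -o enable     # re-enable it afterwards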

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
