Phill_Lunt
Participant

Sim Affinity


Hi Checkmates!

I wonder if someone can help me - I need to manually allocate two CPUs to two interfaces on our Security Gateways, while leaving the remaining CPUs for the gateways to assign automatically.  Is this possible, and what is the process in R80.30?  Just to confirm, what I need to achieve is this:

CPU0 - Eth3-01
CPU1 - Eth3-04
CPU2-5 - All remaining interfaces

Many thanks for your help!
Kind regards

P
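For context, on R80.30 with SecureXL enabled, interface-to-CPU affinity of the shape described above is usually inspected and pinned with the `fw ctl affinity` and `sim affinity` tools - a sketch only; verify the exact syntax against your release's documentation:

```
# Show the current interface/CPU affinity map (verbose)
fw ctl affinity -l -v

# List SecureXL's current interface affinity assignments
sim affinity -l

# Switch SecureXL from automatic to static affinity so that
# interfaces can be pinned manually (interactive on some releases)
sim affinity -s
```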

10 Replies
Chris_Atkinson
Employee

I'm not sure this is really what you want.

I would recommend upgrading to R80.40 or above with dynamic balancing (sk164155) and monitor from there.
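For reference, once on R80.40 or above, the Dynamic Balancing state described in sk164155 can be checked from expert mode - a sketch; the exact tooling is per the SK:

```
# Print whether Dynamic Balancing is enabled on this gateway
dynamic_balancing -p
```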

Note that on systems with fewer than eight cores there are additional considerations.

Phill_Lunt
Participant

Many thanks, Chris

The issue we have is that the 10Gb interfaces seem to reach a maximum of 2Gb throughput, and the assigned CPU then runs at 100%.  My thought was that if we assign a dedicated CPU to each of the interfaces the throughput will be higher.  We are running R80.30 at the moment - is the upgrade an in-place upgrade, or will we need to rebuild the management (SMS) appliance?  Thanks very much for your help.

P

Chris_Atkinson
Employee

By all means start by reviewing the Super7 output and more tailored advice might then be possible.

Refer: https://community.checkpoint.com/t5/Scripts/S7PAC-Super-Seven-Performance-Assessment-Commands/td-p/4...

Otherwise, rebuilding management isn't mandatory to achieve the upgrade. Creation of recovery points (backup / snapshot / migrate export) is recommended as a precaution.
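As a sketch of those recovery points (the snapshot name and export path below are illustrative, not prescribed):

```
# Gaia clish: take a local snapshot before the upgrade
add snapshot pre_upgrade_snap desc "Before R80.40 upgrade"

# Expert mode: export the management database with the migrate tool
cd $FWDIR/bin/upgrade_tools
./migrate export /var/log/pre_upgrade_export.tgz
```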

Phill_Lunt
Participant

Thanks Chris, I will grab the super 7 info and report back - really appreciate your help

Timothy_Hall
Champion

Assigning a single SND CPU to service a 10 Gbps+ interface will only get you to 4-5 Gbps at best before the single CPU is saturated.  What you need to do is enable Multi-Queue for the busy interfaces, and possibly reduce the number of firewall workers in your CoreXL split so there are more SNDs available to keep up with your busy interfaces.  This assumes that your firewall NIC hardware supports Multi-Queue - what model is your gateway?  Also, to echo Chris, we will need to see Super Seven outputs.

Incidentally, in R81 and later all supported interfaces automatically have Multi-Queue enabled, and the CoreXL split is adjusted dynamically which would almost certainly completely avoid the issue you are experiencing.
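The checks and changes described above map to roughly these commands on R80.30 - a sketch; `cpconfig` and `cpmq set` are interactive menus, and a reboot is needed for the changes to take effect:

```
# See the current CoreXL split: which CPUs are SNDs vs firewall workers
fw ctl affinity -l -r

# Check Multi-Queue support and status per NIC driver
cpmq get

# Reduce the number of CoreXL firewall instances (interactive),
# freeing cores for additional SNDs
cpconfig
```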

New 2021 IPS/AV/ABOT Immersion Self-Guided Video Series
now available at http://www.maxpowerfirewalls.com
Phill_Lunt
Participant

Hi @Timothy_Hall  - thanks so much for this information.  We have 13800 appliances (nearing end of life!) and in fact we only need to support a 5Gb Internet circuit so we might get away with assigning a single CPU if indeed multiqueue is not supported on our hardware.  I will get the Super7 info sorted as soon as I can.

Thanks both - really really helpful!

 

Phill_Lunt
Participant

Ah - we have a 10Gb expansion card in the appliance - the model is CPAC-4-10F

Thank you

Phill_Lunt
Participant

Looks like Multi-Queue is supported but currently off:

cpmq get

Active ixgbe interfaces:
eth3-01 [Off]
eth3-04 [Off]

Active igb interfaces:
eth1-02 [Off]
eth1-03 [Off]
eth1-04 [Off]
eth1-05 [Off]
eth1-06 [Off]
eth1-07 [Off]
eth2-01 [Off]
eth2-02 [Off]
eth2-03 [Off]
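Given that output, enabling Multi-Queue for the two ixgbe interfaces would look roughly like this on R80.30 - a sketch; `cpmq set` walks through the eligible interfaces interactively:

```
cpmq set      # enable Multi-Queue for eth3-01 and eth3-04 when prompted
reboot        # required before the new queues take effect
cpmq get      # eth3-01 / eth3-04 should then show [On]
```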

Chris_Atkinson
Employee

This is where you should consider starting your tuning efforts if you need quick wins.

Note that 13800 appliances only support R80.40 and lower.

Phill_Lunt
Participant

Hi Timothy

Many thanks for your help with this.  We adjusted the number of CPUs available to the NICs (using cpconfig) and also enabled Multi-Queue on the 10Gb interfaces.  This has vastly improved the throughput!  Thanks again to you and @Chris_Atkinson 

Kind regards

Phill
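For anyone reproducing this outcome, the resulting state can be sanity-checked along these lines (a sketch):

```
cpmq get               # the 10Gb interfaces should now report [On]
fw ctl affinity -l -r  # per-CPU view of SNDs and firewall instances
fw ctl multik stat     # per-instance load and connection statistics
```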