Hi Checkmates!
I wonder if someone can help me - I need to manually allocate 2 CPUs to two interfaces on our Security Gateways, while leaving the remaining CPUs for the gateways to assign automatically. Is this possible, and what is the process in R80.30? Just to confirm, what I need to achieve is this (my guess at the syntax follows the list):
CPU0 - Eth3-01
CPU1 - Eth3-04
CPU2-5 - All remaining interfaces
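For context, I was imagining something along these lines based on my reading of sk98737 (ATRG: CoreXL) - this is untested guesswork on my part:

fw ctl affinity -l -r          # show the current CPU-to-interface mapping
fw ctl affinity -s -i eth3-01 0   # what I think the manual pinning would look like (unverified)
fw ctl affinity -s -i eth3-04 1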
Many thanks for your help!
Kind regards
P
Assigning a single SND CPU to service a 10Gbps+ interface will only get you to 4-5Gbps at best before that single CPU is saturated. What you need to do is enable Multi-Queue for the busy interfaces, and possibly reduce the number of firewall workers in your CoreXL split so there are more SNDs available to keep up with your busy interfaces. This assumes that your firewall NIC hardware supports Multi-Queue - what model is your gateway? Also, to echo Chris, we will need to see the Super Seven outputs.
Incidentally, in R81 and later all supported interfaces automatically have Multi-Queue enabled, and the CoreXL split is adjusted dynamically which would almost certainly completely avoid the issue you are experiencing.
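On R80.40 and later the cpmq tool is replaced by mq_mng; a quick way to see the Multi-Queue state there is roughly the following (from memory - verify against the Performance Tuning Administration Guide for your version):

mq_mng --show    # lists interfaces, their drivers, and current queue assignments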
I'm not sure this is really what you want.
I would recommend upgrading to R80.40 or above with Dynamic Balancing (sk164155) and monitoring from there.
Note that on systems with fewer than eight cores there are additional considerations.
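From memory, the relevant commands after the upgrade are roughly the following - treat this as a sketch and confirm the exact syntax against sk164155:

dynamic_balancing -p           # print whether Dynamic Balancing is enabled/active
dynamic_balancing -o enable    # turn it on if it isn't already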
Many thanks, Chris
The issue we have is that the 10Gb interfaces seem to reach a maximum of 2Gb throughput, and the assigned CPU is then running at 100%. My thought was that if we assign a dedicated CPU to each of the interfaces, the throughput will be higher. We are running R80.30 at the moment - is the upgrade an in-place upgrade, or will we need to rebuild the management appliance? It's an SMS appliance. Thanks very much for your help.
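For anyone hitting the same thing, we're observing the saturation with standard Gaia/Linux tools (nothing Check Point specific, just what we happened to use):

top           # press 1 to show per-core usage; the SND core sits at 100%
netstat -ni   # RX-DRP climbing on eth3-01 / eth3-04 suggests the core can't keep up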
P
By all means start by reviewing the Super7 output, and more tailored advice might then be possible.
Otherwise, rebuilding management isn't mandatory to achieve the upgrade. Creating recovery points (backup / snapshot / migrate export) is recommended as a precaution.
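Coming back to the Super7 - for reference, the Super Seven performance assessment commands usually requested are:

fwaccel stat
fwaccel stats -s
grep -c ^processor /proc/cpuinfo
/sbin/cpuinfo
fw ctl affinity -l -r
netstat -ni
fw ctl multik stat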
Thanks Chris, I will grab the super 7 info and report back - really appreciate your help
Hi @Timothy_Hall - thanks so much for this information. We have 13800 appliances (nearing end of life!) and in fact we only need to support a 5Gb Internet circuit, so we might get away with assigning a single CPU if Multi-Queue is indeed not supported on our hardware. I will get the Super7 info sorted as soon as I can.
Thanks both - really really helpful!
Ah - we have a 10Gb expansion card in the appliance - the model is CPAC-4-10F.
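We confirmed which driver the card uses from expert mode (standard Linux ethtool, so safe to run):

ethtool -i eth3-01
# driver: ixgbe - which per Tim's comment should support Multi-Queue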
Thank you
Looks like Multi-Queue is supported but currently off:
cpmq get
Active ixgbe interfaces:
eth3-01 [Off]
eth3-04 [Off]
Active igb interfaces:
eth1-02 [Off]
eth1-03 [Off]
eth1-04 [Off]
eth1-05 [Off]
eth1-06 [Off]
eth1-07 [Off]
eth2-01 [Off]
eth2-02 [Off]
eth2-03 [Off]
This is where you should consider starting your tuning efforts if you need quick wins.
Note that 13800 appliances only support R80.40 and lower.
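On R80.30 the enablement is interactive and needs a reboot - roughly the following, but double-check the R80.30 Performance Tuning Administration Guide before touching production:

cpmq set      # select the ixgbe interfaces (eth3-01 / eth3-04) when prompted
cpconfig      # optionally reduce the CoreXL firewall instances to free up SNDs
reboot        # Multi-Queue changes only take effect after a reboot
cpmq get      # afterwards, confirm the interfaces now show [On]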
Hi Timothy
Many thanks for your help with this. We adjusted the number of CPUs available to the NICs (using cpconfig) and also enabled Multi-Queue on the 10Gb interfaces. This has vastly improved the throughput! Thanks again to you and @Chris_Atkinson
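In case it helps anyone else, this is how we sanity-checked the result (same read-only commands as earlier in the thread):

cpmq get                # both 10Gb interfaces now show [On]
fw ctl affinity -l -r   # the queues are spread across the SND cores rather than one CPU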
Kind regards
Phill