Hi,
In one of our R80.20 firewalls, the CPU utilization on one core is very high while the other cores are almost idle. This started after the installation of Jumbo Hotfix (JHFA) Take 183 on the firewall cluster.
While perhaps not related to the upgrade itself, the CPU affinity is now in an undesirable state: eth6 and eth7 are 10 Gbps interfaces, while eth0, eth2 and eth3 are 1 Gbps interfaces.
firewall> fw ctl affinity -l -r
CPU 0: eth6 eth7 eth0
CPU 1: eth2 eth3
CPU 2: fw_5
lpd rtmd fwd wsdnsd mpdaemon in.asessiond cpd cprid
CPU 3: fw_4
lpd rtmd fwd wsdnsd mpdaemon in.asessiond cpd cprid
CPU 4: fw_3
lpd rtmd fwd wsdnsd mpdaemon in.asessiond cpd cprid
CPU 5: fw_2
lpd rtmd fwd wsdnsd mpdaemon in.asessiond cpd cprid
CPU 6: fw_1
lpd rtmd fwd wsdnsd mpdaemon in.asessiond cpd cprid
CPU 7: fw_0
lpd rtmd fwd wsdnsd mpdaemon in.asessiond cpd cprid
CPU 8:
CPU 9:
CPU 10:
CPU 11:
All:
The current license permits the use of CPUs 0, 1, 2, 3, 4, 5, 6, 7 only.
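For reference, a quick way to see which core is saturated and whether the 10 Gbps interfaces are dropping frames (just a sketch, run from expert mode; the interface names are the ones from the output above):

# Per-core utilization: run top, then press 1 to show each CPU on its own line
top
# Check Point's built-in performance viewer (CPU, SND and interface views)
cpview
# The RX-DRP column shows frames dropped at the NIC
netstat -ni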
We previously used the following procedure to change affinity and improve utilization across cores
I have a few questions that I hope you could answer:
Thanks for your help!
Best regards,
Harry
To me, that looks right.
You might consider licensing those additional cores so you can leverage all of them (and potentially use Multi-Queue).
You didn't say which kind of core was saturated (SND vs. Firewall Worker) after the JHFA was applied, but I am assuming it is an SND core, due to a fix included in your JHFA level that makes much more traffic eligible for full acceleration by SecureXL in some circumstances: sk166700: High CPU after upgrade from R77.x to R80.x when running only Firewall and Monitoring blade... This is a great problem to have. 🙂
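To gauge how much of the traffic is actually being fully accelerated (a sketch, run in expert mode on the gateway):

# SecureXL status and accept-template state
fwaccel stat
# Summary counters; compare Accelerated pkts against Total pkts
fwaccel stats -s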
Generally you should avoid static CPU allocations for SoftIRQ via sim affinity wherever possible and enable Multi-Queue on your 10Gbps interfaces; one CPU core (even if dedicated to only one 10Gbps interface) will start getting saturated around 4-5Gbps and start losing frames (RX-DRP as shown by netstat -ni). What I would recommend (a rough command sketch follows the list):
1) If more than 70% of your traffic is fully-accelerated (Accelerated pkts [not conns] shown by fwaccel stats -s) configure a 4/4 split with cpconfig. Otherwise your 3/5 split should be fine for now.
2) Run sim affinity -a to set all interface affinities back to auto mode. You may need to reboot after doing this, can't remember.
3) Enable Multi-Queue on your 10Gbps interfaces; Multi-Queue can only be active on a maximum of 5 physical interfaces in your kernel version. You will most definitely need to reboot after making this change.
4) After reboot all SND/IRQ cores will be able to service the 10Gbps interfaces, thus spreading the load out more evenly among them and hopefully avoiding excessive RX-DRP frame loss.
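A rough command sequence for steps 2 and 3 (illustrative only; the interface names are the ones from your output, and on R80.20 Multi-Queue is, as far as I recall, managed with the cpmq tool):

# Step 2: put all interface SoftIRQ affinities back in automatic mode
sim affinity -a
# Step 3: review the current Multi-Queue status, then enable it on eth6/eth7 via the prompts
cpmq get
cpmq set
# After the reboot: confirm the new distribution and keep an eye on RX-DRP
fw ctl affinity -l -r
netstat -ni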
As far as having 12 cores but only being licensed for 8, I have seen some strange effects when there is that kind of mismatch, but based on your command outputs I think your firewall is handling the situation fine. Your taskset core allocations as you have them configured are OK; just be sure to update them if you change the number of SND cores, to keep wayward processes from grabbing CPU time on the SND cores and trashing their CPU fast cache.
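If you do adjust the split, reviewing and re-pinning a daemon with taskset is straightforward (a sketch; the PID 1234 and the core list 2-7 are placeholders for your actual Firewall Worker cores):

# Show the cores a given process is currently allowed to run on
taskset -pc 1234
# Restrict it to the Firewall Worker cores only (example: cores 2-7)
taskset -pc 2-7 1234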
Thank you very much @Timothy_Hall for your suggestions!
In the end I used my initial plan with sim affinity, since we are (unfortunately) running the tg3 and be2net drivers, which from what I understand do not support Multi-Queue.
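For anyone checking their own hardware, the driver behind each interface can be confirmed from expert mode (eth6 is just an example name):

# Shows the kernel driver and firmware bound to the interface
ethtool -i eth6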
Will try to ensure that we use better NICs when we refresh the hardware on the open servers.
Best regards,
Harry
tg3 and be2net in use on your firewall? My condolences...
"tg3 and be2net in use on your firewall? My condolences..." That made me laugh LOL
I will share a very simple way I have fixed this with customers a few times (can't guarantee it will work):
cpconfig -> disable CoreXL -> reboot -> cpconfig -> re-enable CoreXL -> reboot again -> check (make sure you do it on both firewalls if it's a cluster)
I never figured out why this works, but I guess, like with anything else, it probably "resets" the CoreXL config and starts fresh.
Let us know if you try it.
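If you go this route, a minimal sanity check after the second reboot (just a sketch):

# CoreXL should be enabled again with the expected number of firewall instances
fw ctl multik stat
# Interface and daemon affinities should look reasonable again
fw ctl affinity -l -r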
Andy
However, if what I suggested above fails, I would use the article below to debug the issue and send the results to TAC:
sk43443