Hi!
I am about to replace an R80.30 VSX VSLS cluster with an R81.10 one (moving from 13800 to 16200 appliances).
The default Multi-Queue and affinity configuration is shown below, which I believe will be perfectly fine in my environment.
[Expert@vsx20:0]# mq_mng -o
Total 48 cores. Available for MQ 4 cores
i/f       driver   driver mode   state   mode (queues actual/avail)   cores
----------------------------------------------------------------------------
Mgmt      igb      Kernel        Up      Auto (4/4)                   0,24,1,25
Sync      igb      Kernel        Up      Auto (4/4)                   0,24,1,25
eth1-01   igb      Kernel        Up      Auto (4/4)                   0,24,1,25
eth2-01   i40e     Kernel        Up      Auto (4/4)                   0,24,1,25
eth2-02   i40e     Kernel        Up      Auto (4/4)                   0,24,1,25
eth3-01   i40e     Kernel        Up      Auto (4/4)                   0,24,1,25
eth3-02   i40e     Kernel        Up      Auto (4/4)                   0,24,1,25
[Expert@vsx20:0]# fw ctl affinity -l
VS_0 fwk: CPU 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
VS_1 fwk: CPU 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
VS_2 fwk: CPU 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
VS_3 fwk: CPU 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
VS_4 fwk: CPU 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
VS_5 fwk: CPU 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
VS_6 fwk: CPU 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
VS_7 fwk: CPU 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
VS_8 fwk: CPU 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
Interface eth3-01: has multi queue enabled
Interface Mgmt: has multi queue enabled
Interface Sync: has multi queue enabled
Interface eth1-01: has multi queue enabled
Interface eth3-02: has multi queue enabled
Interface eth2-01: has multi queue enabled
Interface eth2-02: has multi queue enabled
In case I need more SND cores, I believe the preferred setup would be:
MQ: 0-1 12-13 24-25 36-37
Affinity: 2-11 14-23 26-35 38-47
I have tried to find the commands to configure that, but without success.
Anyone?
Best regards
Mattias
Have you considered using Dynamic Balancing (sk164155) to manage elements of this?
Otherwise (specific affinity aside), set the new number of FWK cores:
fw ctl affinity -s -d -fwkall XX
reboot
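For example, a sketch only (40 FWK cores is an assumed value for the 48-core 16200 above, and the dynamic_balancing command below is from sk164155; check your JHF level):
# check whether Dynamic Balancing is running (per sk164155)
dynamic_balancing -p
# manual alternative: give 40 cores to FWK (example value), leaving 8 for SND
fw ctl affinity -s -d -fwkall 40
reboot
# verify the resulting layout after reboot
fw ctl affinity -l
mq_mng -o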
The number of SNDs is set through VSX itself; try to change it via cpconfig in the VS0 context.
I thought CoreXL should be disabled on VS0 in VSX?
Configuration Options:
----------------------
(1) Licenses and contracts
(2) SNMP Extension
(3) PKCS#11 Token
(4) Random Pool
(5) Secure Internal Communication
(6) Disable cluster membership for this gateway
(7) Disable Check Point Per Virtual System State
(8) Disable Check Point ClusterXL for Bridge Active/Standby
(9) Hyper-Threading
(10) Check Point CoreXL
(11) Automatic start of Check Point Products
(12) Exit
Enter your choice (1-12) :10
Configuring Check Point CoreXL...
=================================
CoreXL is currently disabled.
Would you like to enable CoreXL (y/n) [y] ?
Dynamic Balancing is a great feature for VSX, but I don't dare to enable it in production yet.
I tried fw ctl affinity -s -d -fwkall XX, but then FWK used the same cores as the SND?
I'm using it on VSX. It seems to work great; however, there is a bug related to starting Dynamic Balancing after a reboot. I believe this is now resolved (it might be in JHFA75, which has just been released as an ongoing take).
I would say that in most cases Dynamic Balancing aims to keep SND cores at around 50% overall utilisation during peak hours.
Hello, can you explain the bug in more detail?
Hi Mattias,
The upcoming R81.20 will have Dynamic Balancing on by default for VSX.
I'd be happy to ease any concern you have regarding production enablement.
Feel free to approach me at amitshm@checkpoint.com.
Thanks,
Amit
Hi!
When it comes to Check Point, I always try to use the default settings when possible to avoid issues.
That is why I don't want to enable Dynamic Balancing, as it is off by default in R81.10.
Good news that it is enabled by default in R81.20; it will be nice not to have to deal with core tuning in the future.
/Mattias
When using mq_mng in auto mode, it works perfectly fine with the fw ctl affinity -s -d -fwkall XX command, with or without Hyper-Threading enabled. The cores are set up correctly and automatically after a reboot.
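So, assuming mq_mng -s auto restores automatic mode (mirroring the -s manual syntax shown elsewhere in this thread), a minimal sketch with an example FWK count would be:
# keep Multi-Queue automatic and only change the FWK count;
# after the reboot, MQ picks up the remaining cores for SND on its own
mq_mng -s auto
fw ctl affinity -s -d -fwkall 40
reboot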
Thank you!
Use 'cat /proc/cpuinfo' - this will show the physical core layout.
This matters with regard to how L2 cache is shared between the cores.
https://www.quora.com/How-is-L2-cache-shared-between-different-cores-in-a-CPU
Whether this matters for an SND/FWK spread I have no idea, but it is something Dynamic Balancing surely does not take into account.
I would suspect that heavily utilized, multithreaded Virtual Systems benefit from sharing L2 cache, but I am not sure even Check Point knows.
Internally we have always assigned VSs and SNDs based on this, simply as common sense.
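A quick way to see that layout (plain Linux, nothing Check Point specific):
# print logical CPU, socket and physical core IDs side by side;
# logical CPUs sharing the same physical id/core id pair are HT siblings
grep -E "^(processor|physical id|core id)" /proc/cpuinfo | paste - - -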
The SND/FWK suggestion I wrote is taken from the R81.10 Performance Tuning Administration Guide: https://sc1.checkpoint.com/documents/R81.10/WebAdminGuides/EN/CP_R81.10_PerformanceTuning_AdminGuide...
The syntax for changing the SND/FWK split doesn't seem to apply to VSX, though?
Hi,
Yes, that guide is based on a regular gateway; the idea is the same, but different commands need to be used for VSX.
Let's say based on 16200:
fw ctl affinity -s -d -fwkall 39
mq_mng -s manual -c 0-1 12-13 24-25 36-37
reboot
If needed, check and reconfigure the FWK cores to avoid collisions with the SND cores:
fw ctl affinity -s -d -vsid x-y -cpu 2-11 14-23 26-35 38-47, where x-y is the range of your VSs
You can also exclude some cores for heavily utilized processes like FWD and assign them to a particular VS, e.g. for VS2:
fw ctl affinity -s -d -vsid x-y -cpu 2-11 14-23 26-35 38-46
fw ctl affinity -s -d -pname fwd -vsid 2 -cpu 47
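After the reboot, the result can be verified with the same commands as earlier in the thread (a suggested check, not part of the original guide):
# SND cores 0-1 12-13 24-25 36-37 should now serve the interface queues
mq_mng -o
# and no VS fwk instance should overlap those SND cores
fw ctl affinity -l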
BR
Daniel.
Hi,
Dynamic Balancing is aware of NUMA and HTT, and maintains their order symmetrically.
i.e.:
- It adds SNDs on physical CPUs (2 logical cores are added on each change)
- When adding SNDs, it strives to maintain an equal amount per socket
Thank you for disproving my uneducated guess! 🙂