We are running R77.30 VSX with 16 licensed CPU cores and 4 Virtual Systems. Initially CoreXL was configured with 14 fwk instances; we then changed it to 12, rebooted the gateway, and in SmartConsole assigned 3 cores to each VS, so that some CPU cores remain free for interfaces when load increases.
Current output of affinity:
# fw ctl affinity -l -a
eth6: CPU 1
eth7: CPU 1
eth8: CPU 1
eth9: CPU 1
eth0: CPU 1
eth11: CPU 1
eth10: CPU 1
eth5: CPU 1
VS_0 fwk: CPU 0
VS_1 fwk: CPU 2 3 4 5 6 7 8 9 10 11 12 13 14 15
VS_2 fwk: CPU 2 3 4 5 6 7 8 9 10 11 12 13 14 15
VS_3 fwk: CPU 2 3 4 5 6 7 8 9 10 11 12 13 14 15
VS_4 fwk: CPU 2 3 4 5 6 7 8 9 10 11 12 13 14 15
Is this how it should be? I would expect the number of CPU cores assigned to VS_1-VS_4 to be just 12, not 14 (CPU 2-15).
Per Check Point TAC, we should manually define which cores are assigned to the Virtual Systems. Does anybody have the same experience?
You configure VS cores in the gateway object GUI, not in cpconfig (cpconfig only adjusts cores on VS0, and normally I would not enable CoreXL on VS0 at all). For example, the VS below is configured with 6 cores.
Then you can use the fw ctl affinity and sim affinity (for SecureXL) commands to adjust which cores handle which VS.
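As a quick, hedged illustration of the sim affinity side (syntax per the R77.x SecureXL CLI; verify against your version's documentation before relying on it):

sim affinity -l     # list the current SecureXL interface-to-core mapping
sim affinity -a     # automatic mode: SecureXL balances interfaces itself
sim affinity -s     # static mode: define the interface-to-core mapping manually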
Below is a sample from a VSX setup with 7 VSs on a 20-core system. Cores 0-3 are reserved for SecureXL / interfaces, and the rest are assigned depending on VS requirements. Note that VS3 and VS6 in this case have dedicated cores (5 and 6) for the Identity Awareness processes (pdpd/pepd).
fw ctl affinity -l
eth1-01: CPU 1
eth1-02: CPU 2
eth1-03: CPU 3
eth1-04: CPU 3
eth1-06: CPU 0
eth2-01: CPU 2
eth2-02: CPU 3
Mgmt: CPU 1
Sync: CPU 1
VS_0: CPU 4
VS_0 fwk: CPU 4
VS_1: CPU 4
VS_1 fwk: CPU 4
VS_2: CPU 4
VS_2 fwk: CPU 4
VS_3: CPU 7
VS_3 fwk: CPU 7
VS_3 pdpd: CPU 5 6
VS_3 pepd: CPU 5 6
VS_4: CPU 7
VS_4 fwk: CPU 7
VS_5: CPU 4
VS_5 fwk: CPU 4
VS_6: CPU 10 11 12 13 14 15 16 17 18 19
VS_6 fwk: CPU 10 11 12 13 14 15 16 17 18 19
VS_6 pdpd: CPU 5 6
VS_6 pepd: CPU 5 6
VS_7: CPU 7
VS_7 fwk: CPU 7
For the config above, you would run these commands:
fw ctl affinity -s -d -vsid 6 -cpu 10-19
fw ctl affinity -s -d -vsid 0 -cpu 4
fw ctl affinity -s -d -vsid 1 -cpu 4
fw ctl affinity -s -d -vsid 2 -cpu 4
fw ctl affinity -s -d -vsid 5 -cpu 4
fw ctl affinity -s -d -vsid 3 -cpu 7
fw ctl affinity -s -d -vsid 4 -cpu 7
fw ctl affinity -s -d -vsid 7 -cpu 7
fw ctl affinity -s -d -pname pdpd -vsid 3 -cpu 5 6
fw ctl affinity -s -d -pname pepd -vsid 3 -cpu 5 6
fw ctl affinity -s -d -pname pdpd -vsid 6 -cpu 5 6
fw ctl affinity -s -d -pname pepd -vsid 6 -cpu 5 6
So you kind of have to look at your current CPU consumption and design the layout from there.
Remember, there is no right or wrong answer - it all depends on your environment and requirements. We tend to allocate dedicated cores to "important" VSs and let less important ones share the same cores, so that if something goes terribly wrong on a less important VS, it does not affect the important ones at all. Feel free to PM if you want.
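As a starting point for that sizing exercise, these are the usual ways to check per-core load (standard Gaia/Check Point tools; exact output varies by version):

fw ctl affinity -l -a        # current affinity map across all VSs
cpstat os -f multi_cpu       # per-CPU utilization snapshot
top                          # press 1 for the per-core view; the fwk processes show per-VS load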
Thanks for the explanation.
For VSX you can simply disable CoreXL using cpconfig, as that only pertains to VS0, aka the management VS or host.
CoreXL for a VS is set from SmartConsole: you set the number of instances per VS that you want to run. This is because the VSs are actually user-mode processes, in contrast to the kernel instances on a physical gateway, which makes this a different approach. The built-in Linux scheduler checks in real time which core is least loaded and dynamically assigns a core to a process (in this example, the fwk).
This means you can both undersubscribe and oversubscribe, and you can also apply relative weights per VS (making one more important than another).
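A minimal sketch of that idea, reusing the affinity syntax from the earlier post (the VS IDs and core numbers here are purely illustrative):

fw ctl affinity -s -d -vsid 1 -cpu 2 3     # important VS: two dedicated cores
fw ctl affinity -s -d -vsid 2 -cpu 4       # two low-priority VSs oversubscribe
fw ctl affinity -s -d -vsid 3 -cpu 4       # the same single core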
Hope that helps
Peter !!
First, a question. What do you mean by: "We are running R77.30 VSX with 16 licensed CPU cores and 4 Virtual Systems. Initially CoreXL was configured with 14 fwk instances; we then changed it to 12"?
Do I understand correctly that this is about the number of SecureXL (SND) cores? You have two by default and assigned two more?
Now, after doing that, you need to set up affinity for the NICs and VSs. In your output, only Core 1 is assigned to NICs. The same goes per VS: if you do not change the settings manually, all remaining cores will be assigned randomly and listed all together, as in your output above.
Please refer to ATRG: CoreXL when adjusting the settings.
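For instance, pinning an interface to a specific core uses the interface form of the affinity command (the interface name and core number here are placeholders; check ATRG: CoreXL for your version's exact syntax):

fw ctl affinity -s -i eth6 2     # bind eth6's traffic handling to CPU 2
fw ctl affinity -l -a            # verify the resulting mapping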
Via cpconfig we had CoreXL configured with 14 instances; now it is disabled completely, cores are manually assigned via affinity, and the interfaces are in automatic mode.
fw ctl affinity -l -a
eth6: CPU 1
eth7: CPU 4
eth8: CPU 2
eth9: CPU 3
eth0: CPU 1
eth1: CPU 5
eth10: CPU 4
eth11: CPU 5
eth2: CPU 3
eth3: CPU 3
eth4: CPU 5
eth5: CPU 2
VS_0 fwk: CPU 0
VS_1 fwk: CPU 6 7 8
VS_2 fwk: CPU 9 10 11
VS_3 fwk: CPU 12 13
VS_4 fwk: CPU 14 15
The current license permits the use of CPUs 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 only.
Hold on. How many cores do you use for SecureXL? You said 4, right? I can see cores 4 and 5 also assigned to NICs. You should only assign the NICs to cores that are bound to SecureXL.
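One quick way to spot such mismatches is the per-CPU (reverse) listing, which shows each core together with everything pinned to it:

fw ctl affinity -l -r     # list affinity per CPU instead of per interface/VS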
Question: when Dynamic Split becomes available on VSX (which I'm told is coming), is there still a place for manual affinity?
I believe you can choose either, similar to the physical firewall case.
Correct, and similar to R80.40, manual affinity will be the default; Dynamic Split will need to be enabled explicitly.
Do we have an ETA on this? I appreciate this may not be something that can be answered, but hey, may as well ask the question.
Sure, it is targeted for the upcoming JHFs.
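If it follows the existing non-VSX feature (known there as Dynamic Balancing), enabling it would presumably look something like the following; treat the exact command names and flags as assumptions to verify against the release notes for your version:

dynamic_balancing -o enable     # enable Dynamic Balancing / Dynamic Split (syntax as on non-VSX gateways)
dynamic_balancing -p            # print the current state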