I have recently begun supporting a cluster on which the previous admin changed the default CoreXL configuration, i.e. all 4 cores are now operating as both SNDs and FW workers. I would like to reset this to the default settings; any idea how this can be accomplished?
I assume a clean install would accomplish this, but that seems a bit drastic.
Would disabling and re-enabling CoreXL do it?
Thoughts/suggestions?
Thanks in advance!
You should be able to change the split via cpconfig.
The split for a four-core box should be 1/3 (one SND, three workers).
The instances you don't configure for workers will become SNDs.
Which means: if you say three workers, you'll get one SND.
A reboot will be required.
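After the reboot, one way to sanity-check the resulting split is to count which cores carry interfaces (SNDs) versus `fw_N` worker instances in the `fw ctl affinity -l -r` output. This is a minimal sketch; the here-string below is hypothetical sample output for a healthy 1/3 split, and on a real gateway you would pipe the live command output into the same filters instead.

```shell
# Sketch: count SND vs worker cores from (hypothetical) `fw ctl affinity -l -r` output.
# On a real gateway, replace the sample variable with the live command output.
affinity_output='CPU 0:  eth1-01 eth1-02 Sync Mgmt
CPU 1:  fw_2
CPU 2:  fw_1
CPU 3:  fw_0'

# Lines listing a fw_N instance are worker cores; the rest are SNDs.
worker_cores=$(printf '%s\n' "$affinity_output" | grep -c ' fw_')
snd_cores=$(printf '%s\n' "$affinity_output" | grep -vc ' fw_')
echo "SND cores: $snd_cores, worker cores: $worker_cores"
```

If any CPU line shows both an interface and a `fw_N` instance, that core is doing double duty, which is what the health check warns about.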
Thanks Dameon, I have tried this and still have the same result.
Let me back up a bit and explain my thinking. While reviewing the health check output for this cluster, I see the following:
SND/FW Core Overlap - WARNING
Cores detected operating as both fw workers and SNDs. Please review sk98737 and sk98348 for more information.
CoreXL Settings:
Interface Mgmt: CPU 0
Interface eth1-01: has multi queue enabled
Interface Sync: has multi queue enabled
Interface eth1-02: has multi queue enabled
Interface eth1-03: has multi queue enabled
Interface eth1-04: has multi queue enabled
Initially I ignored the messages indicating that Multi-Queue was enabled and focused on determining why the cores were set up as both SNDs and FW workers, which led to a lot of reading and the initial question.
Now, after more reading on Multi-Queue, it seems this may be why I am seeing the cores as both SND and FW workers.
Attaching an image of what I see in cpview and fwaccel stat.
I am struggling to understand whether the health check warning is something to be concerned about; I have seen it very clearly stated that running cores as both SND and FW workers is generally not recommended.
The other thing that is odd to me is the name shown in fwaccel stat: "KPPAK" is new to me; I typically see SND.
Thanks!
I think we need Super Seven Commands output here to see what's going on.
https://community.checkpoint.com/t5/Scripts/S7PAC-Super-Seven-Performance-Assessment-Commands/m-p/40...
What version and Jumbo HFA are you running? What is the hardware model of your gateway?
What you have posted so far makes no sense to me, please provide Super Seven output.
The KPPAK would seem to indicate that PPAK/SecureXL is running in the kernel as opposed to process space. Running PPAK in process space is listed as a new feature in R81.20, but it is common for features like this to be present unofficially in earlier releases; this new output must have been added in a Jumbo HFA or in R81.10.
My interpretation of the fw ctl affinity output is that USFW is enabled along with Dynamic Split/Balancing (sk164155: Dynamic Balancing for CoreXL), both of which are enabled by default. What does the output of dynamic_balancing -p
show? If both are enabled, I think this output is expected and the healthcheck script needs to be updated, as SNDs and firewall workers/instances will indeed be sharing cores (though hopefully only during split transitions) with Dynamic Balancing enabled.
[Expert@GW-1:0]# dynamic_balancing -p
Dynamic Balancing is currently Off
Looking at sk164155 that you have mentioned above it appears that Dynamic Balancing is not supported on the 6200B platform.
Since this is a 6200B, which only has 2 physical cores, and 4 logical cores, Dynamic Balancing will not be supported as it requires at least 4 physical cores.
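Since the physical-vs-logical core distinction is what decides Dynamic Balancing support here, one way to confirm it is to parse /proc/cpuinfo: counting `processor` entries gives logical cores, while unique (physical id, core id) pairs give physical cores. This is a sketch over a hypothetical 2-physical/4-logical sample like the 6200B; on a real gateway you would read the file directly instead of the here-string.

```shell
# Sketch: distinguish physical cores from hyperthreaded logical cores.
# The sample below is a hypothetical /proc/cpuinfo fragment; on a real
# box use the file itself.
cpuinfo='processor : 0
physical id : 0
core id : 0
processor : 1
physical id : 0
core id : 1
processor : 2
physical id : 0
core id : 0
processor : 3
physical id : 0
core id : 1'

# Logical cores: one "processor" entry each.
logical=$(printf '%s\n' "$cpuinfo" | grep -c '^processor')
# Physical cores: unique (physical id, core id) pairs.
physical=$(printf '%s\n' "$cpuinfo" \
  | awk -F': ' '/^physical id/ {p=$2} /^core id/ {print p "-" $2}' \
  | sort -u | wc -l | tr -d ' ')
echo "logical=$logical physical=$physical"
```

On the sample this reports 4 logical but only 2 physical cores, i.e. below the 4-physical-core minimum that sk164155 gives for Dynamic Balancing.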
I'm almost certain that it is expected for FW workers and SNDs to share cores on 2-core machines, but, to be on the safe side:
Another thing to check is that '$FWDIR/conf/fwaffinity.conf' hasn't been changed, the only non-comment line should be "i default auto".
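A quick way to run that check is to strip comments and blank lines and confirm exactly the default line remains. This is a sketch using a throwaway sample file; on the gateway you would point it at $FWDIR/conf/fwaffinity.conf itself (the sample's comment lines are illustrative, not the file's real contents).

```shell
# Sketch: verify fwaffinity.conf is in its default state.
# A temporary sample file stands in for $FWDIR/conf/fwaffinity.conf here.
conf=$(mktemp)
cat > "$conf" <<'EOF'
# fwaffinity.conf (sample)
# comment lines start with '#'
i default auto
EOF

# Drop comments and blank lines; default state leaves exactly one line.
effective=$(grep -v '^#' "$conf" | grep -v '^[[:space:]]*$')
echo "$effective"
[ "$effective" = "i default auto" ] && echo "default state"
rm -f "$conf"
```

Anything else surviving the filter means the affinity file has been hand-edited and is overriding the automatic assignment.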
Thanks Amit,
Ran the commands and they show 3 workers; I did 'mq_mng -s auto' and a reboot, and everything is the same as before, still showing the 4 cores operating as both SND and FW workers.
[Expert@GW-1:0]# $FWDIR/boot/fwboot corexl def_instance4_count
[Expert@GW-1:0]# echo $?
3
[Expert@GW-1:0]# mq_mng -s auto
[Expert@GW-1:0]#
Also checked fwaffinity.conf and it is in default state.
So, does this mean that the systems are in an expected "default" state?
If so, then I agree with Tim that the Health Check Script should be updated so it's not throwing a warning.
@AmitShmuel is right that on systems with only two physical cores, seeing SND and workers on the same core is expected behavior.
Which suggests the healthcheck script should probably be updated to account for this.
@ShaiF can you take a look?
@AmitShmuel I'm familiar with 3/1 (fwk/ppak) for 4-core machines. ppak/fw working on the same core is always bad practice (locks, soft lockups, latency...). If your official position is that we run 4/4, then please send mail to me and @AndyY and we'll adjust the test.
3/1 will be used on machines with 4 physical cores, such as the 3200.
ppak/fw sharing the same cores can be seen on machines with 2 physical cores, such as the 5200.