What is the best process to increase the number of SNDs (or decrease the number of workers) in a cluster?
I have a ClusterXL of two gateways, each with 8 CPU cores and the default settings, so 2 SNDs and 6 workers.
The SND CPU usage is around 85%, while the workers are around 10%.
So I would like to change it to 3 SNDs and 5 workers.
As far as I understand, the ClusterXL member with the greater number of cores changes its state to DOWN.
So if I increase the number of SNDs on the current standby member first, it will immediately cause a failover and interrupt the connections handled by one of the instances, correct?
I can't see whether any of the standard procedures, such as "Zero Downtime" or MVC, can help with this change.
I assume that MVC doesn't support two cluster members with a different number of cores.
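For anyone reproducing this, the current core assignment and per-core load can be checked with standard expert-mode commands before deciding on the split; a minimal sketch (generic commands, not specific to this environment):

    fw ctl affinity -l -r    # per-CPU view: which CPUs serve interfaces/SND and which run CoreXL FW instances
    fw ctl multik stat       # ID, CPU and connection count of each CoreXL FW instance
    top                      # press 1 for the per-CPU view; the SND cores should be the ones near 85%

If the SND cores are the saturated ones while the FW instances sit mostly idle, moving one core from workers to SND is the expected fix.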
What kind of environment are you running? Open server or an appliance? Which software version? I am asking because Dynamic Balancing should get you more SNDs on the fly, without any manual choice, but it only works on Check Point appliances.
In any case, cluster members require exactly the same number of FWK cores to sync. Any manual change will cause the cluster to break, and you should only do that in a service window, because some connections will be cut in the process.
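A quick way to confirm whether both members currently agree on the number of FW instances, and to see the cluster state, is with the usual commands (shown here for illustration):

    fw ctl multik stat    # run on each member; the list of instance IDs must match on both
    cphaprob stat         # shows the state of both members (Active / Standby / Down)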
Open servers. We are running R80.40 (planning to upgrade to R81.20 in a month).
So you agree that the process can't be any better than:
1) Increase SNDs on the standby -> It will cause a failover and break the connections of one of the instances. Will the connections of the other 5 instances stay in sync?
2) Increase SNDs on the new standby -> It won't cause a failover and connections will automatically sync.
Not exactly. Any change in the number of FWK cores will cause the cluster to break. The member with the lower number of FWKs will become active. Mind that you will need to reboot a member every time you change the core assignment. To re-align, you will have to reboot the second member as well, after setting the same number of cores.
Two reboots, two downtimes without synchronization. Hence, a service window.
In summary, it does not matter whether you increase or reduce the number of SNDs; you will break the cluster either way. The only reasonable alternative is to migrate from open servers to Check Point appliances. There, with any supported version, you will not need to redefine SNDs and FWKs, because Dynamic Balancing will take care of that on the fly, without breaking the cluster, in an automated and transparent manner, not requiring any manual action from you.
More information about Dynamic Balancing is available in sk164155.
Also, an important note: R80.40 support ended in April, so you need to move to R81.20 ASAP.
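For appliance users, sk164155 also documents how to check whether Dynamic Balancing is enabled; to the best of my recollection the status check looks like the line below, but please verify the exact syntax in the SK for your version:

    dynamic_balancing -p    # assumed syntax per sk164155: prints whether Dynamic Balancing is enabled on this member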
I was hoping that the following (from the ClusterXL guide) would apply.
Fail-over from a Cluster Member to a peer Cluster Member with a greater number of CoreXL Firewall instances keeps all connections.
Fail-over from a Cluster Member to a peer Cluster Member with a smaller number of CoreXL Firewall instances interrupts some connections. The connections that are interrupted are those that pass through CoreXL Firewall instances that do not exist on the peer Cluster Member.
https://sc1.checkpoint.com/documents/R81/WebAdminGuides/EN/CP_R81_ClusterXL_AdminGuide/Content/Topic...
I guess the cluster will fail over and break after the reboot, so the downtime is msecs, right?
Does the change and reboot on the new standby cause a cluster break too? In this case both members end up with the same number of cores.
No - if the cores are not aligned, ClusterXL itself will not work and will not sync any connections after startup! So the service window is the only way - if both members come up with the same FWK/SND configuration, there should be only a very short break.
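Once both members are back up with a matching FWK/SND configuration, the cluster and sync state can be confirmed with the usual commands, for example:

    cphaprob stat        # both members should report Active / Standby again
    cphaprob syncstat    # Delta Sync statistics; counters should increase without errors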
Absolutely, a maintenance window is required. I was just hoping to better understand the process and the impact after each of the steps.
No, you have to reboot to apply the changes to CoreXL on each node: https://sc1.checkpoint.com/documents/R81.20/WebAdminGuides/EN/CP_R81.20_PerformanceTuning_AdminGuide...
So you can do the changes and then reboot both nodes during a maintenance window - that should fix the issue...
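For completeness, the manual change on each member is roughly the following (a sketch of the usual procedure; the exact cpconfig menu wording may differ between versions):

    cpconfig    # select "Configure Check Point CoreXL" and set the number of firewall instances to 5;
                # on an 8-core open server the remaining 3 cores are then used for SND
    reboot      # required for the new CoreXL configuration to take effect

Repeat on the second member during the same maintenance window so that both end up with an identical split.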
Absolutely, reboot included.
1) Increase SNDs on the standby (reboot required) -> It will cause a failover and break the connections of one of the instances. Will the connections of the other 5 instances be in sync?
2) Increase SNDs on the new standby (reboot required) -> It won't cause a failover and connections will automatically sync, right?
You see it - not possible without breaking existing connections! I would reboot both at the same time. ClusterXL will not work if the cores are different; the SND/FWK split must be the same on both nodes.