Hello guys,
I have a query here and would be grateful for your assistance.
We are planning to replace the current 5000 cluster setup with a new 9000 cluster setup.
The client wants it done in a phased way to minimize the overall downtime, hence my question: is it supported in non-Maestro setups to mix and match?
For example, can we replace the secondary 5000 GW with a new 9000 GW, reset SIC, deploy policy on the cluster, perform a failover to the new 9000 GW, and then replace the primary 5000 GW in the same way?
Hello,
no, it's not possible to have a cluster with different hardware (only Maestro allows this).
It is recommended that a cluster be made of identical hardware.
The client has to accept some minimal downtime during the replacement of the firewalls.
I know a customer that did this, though I told them it's 100% NOT supported. So yes, it would work, but just remember it's at your own risk: if there are any issues, opening a TAC case would not do much, as they would certainly tell you the exact same thing.
I have also used this method with success.
Note that it's "not supported" as in tech support won't help you if you have trouble with it.
How well this works depends on the exact hardware. As long as they have the same number of cores, sync will work and you should get a stateful failover. Fewer cores can sometimes sync to more cores, but I haven't tested this enough to know the circumstances and limits. I would only try this with exactly the same version on all four members.
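If you want to sanity-check this before committing to a failover, comparing the CoreXL worker counts and watching the sync state is straightforward. A minimal sketch using standard gateway commands from expert mode (output formats vary by version):

```
# Run on each cluster member in expert mode and compare the results.

# Number of CoreXL Firewall instances (workers) on this member;
# the counts must match between members for state sync to work.
fw ctl multik stat

# Overall cluster state; healthy members report Active/Standby
# with no interfaces or devices in a problem state.
cphaprob stat

# Per-interface cluster status, including the sync interface.
cphaprob -a if
```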
If you can't have an outage, you need to run it in a lab to build a process. I would then run the process to find any snags until you have at least three consecutive successes. For complicated things, I generally end up running it 10-15 times to build up a sort of muscle memory.
Let me clarify what I meant in my first response. It would work, BUT not as far as the cluster mechanism goes; only one firewall at a time will be passing traffic, so if that's something the client is okay with, then yes.
It's always best to be 100% honest with people. If something is unsupported, they need to know it, so they are really doing the process at their own risk, knowing that if it does not work, help will be very limited.
Not supported. In later versions you can achieve this with ElasticXL; for now, only Maestro can do this.
ElasticXL supports different models of Check Point appliances starting in these versions (Feature ID PMTR-118534):
I would just install the new cluster and add it in SmartConsole. Clone the policy and install this policy on the new gateways. Do the cutover at the switch level.
ClusterXL, which is what you use now, is documented this way:
ClusterXL operation completely relies on internal timers and calculation of internal timeouts, which are based on hardware clock ticks.
Therefore, in order to avoid unexpected behavior, ClusterXL is supported only between machines with identical CPU characteristics.
Not only the hardware, the software is also a challenge!
ClusterXL is supported only between identical operating systems (all Cluster Members must be installed on the same operating system).
ClusterXL is supported only between identical Check Point software versions - all Cluster Members must be installed with identical Check Point software, including OS build and hotfixes.
All Check Point software components must be the same on all Cluster Members, meaning the same Software Blades and features must be enabled on all Cluster Members:
- The status of each Software Blade on all Cluster Members must be the same (either enabled or disabled)
- The number of CoreXL Firewall instances must be the same on all Cluster Members
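As a sanity check, each of these requirements can be verified from the command line before trying to form the cluster. A minimal sketch of the standard commands, run on each member and diffed between the two (output formats vary by version):

```
fw ver                         # Check Point version and build
clish -c "show version all"    # Gaia OS version, build, and kernel
cpinfo -y all                  # installed hotfixes / Jumbo Hotfix take
enabled_blades                 # Software Blades enabled on this gateway
fw ctl multik stat             # number of CoreXL Firewall instances
```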
Officially, it is not supported to use different gateway models in a ClusterXL cluster unless you are doing "mix and match" in a Maestro Security Group.
Unofficially, it is possible to create a valid cluster with different hardware models if you turn off Dynamic Split and configure manual static splits with the same number of Firewall Worker instances. The number of Dispatchers/SNDs on each gateway does not need to match, as the SecureXL state is not synced between members. So, for example, if you have a 16-core gateway and a 24-core gateway, with split settings of 4/12 and 12/12, respectively, the cluster will sync up and work fine. However, I would not recommend leaving it in this state for a prolonged period outside of a lab environment, as you may encounter unexpected behavior.
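Roughly, that manual split would be set up like this on the two gateways in the example. This is a sketch, not a procedure: dynamic_balancing is the CLI name for Dynamic Split per sk164155, and the worker count itself is set through the interactive CoreXL menu in cpconfig, so verify against your version's documentation:

```
# --- On the 16-core member (target split: 4 SNDs / 12 workers) ---

# Stop Dynamic Split/Balancing from re-adjusting the split on its own.
dynamic_balancing -o disable

# Set the number of CoreXL Firewall instances to 12 in the interactive
# "Configure Check Point CoreXL" menu, then reboot the member.
cpconfig

# --- On the 24-core member (target split: 12 SNDs / 12 workers) ---
dynamic_balancing -o disable
cpconfig    # again set 12 Firewall instances, then reboot

# --- After both reboots, verify on each member ---
fw ctl multik stat    # both members should show 12 Firewall instances
cphaprob stat         # the cluster should sync up cleanly
```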
Once this is done, Multi-Version Cluster (MVC) can be enabled to force gateways running different code levels to successfully sync their state with each other, and ensure no connections are lost during the initial failover from the older code to the newer code. Prior to the availability of MVC, one trick I would use to blunt the effect of a non-synced failover from the old code onto the new code was to disable "drop out of state TCP packets" in the Global Properties under Stateful Inspection...Out of State packets. This would be done prior to starting the upgrade and the policy reinstalled to the cluster. Once the non-stateful failover occurs onto the new code, you'll need to "exercise" any existing rarely-used data connections as part of your test plan, and these connections will be resurrected back into the state table of the new gateway, since out of state TCP drops are off. Just don't forget to turn this back on once the upgrade is complete!
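For the MVC part, the toggle is a per-member clish setting on R80.40 and later (the out-of-state change mentioned above is made in SmartConsole under Global Properties, so there is no CLI step for it). A minimal sketch; check sk107042 for the exact syntax on your version:

```
# In Gaia clish on the member that has already been moved to the new version:
set cluster member mvc on
save config

# From expert mode, confirm the cluster is syncing despite the version gap:
cphaprob stat
```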
However, there is a new kernel variable added in the latest Jumbo HFAs for R81.20 and R82+ called fwha_allow_different_corexl_instances which, if set to 1 (default is 0), apparently allows a cluster to sync up and work with a different number of Firewall Worker instances on the members. I assume this is for Maestro "mix and match"; I have no idea if this will work with a ClusterXL HA cluster, as documentation on this new variable is pretty sparse.
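If you do experiment with it, Check Point kernel variables of this kind are normally set at runtime with fw ctl and made persistent via fwkern.conf. A sketch of the usual mechanics; whether the variable behaves as described on a ClusterXL HA pair is, as noted, unverified:

```
# Set the variable at runtime in expert mode (does not survive a reboot):
fw ctl set int fwha_allow_different_corexl_instances 1

# Read back the current value to confirm:
fw ctl get int fwha_allow_different_corexl_instances

# Make it persistent across reboots:
echo 'fwha_allow_different_corexl_instances=1' >> $FWDIR/boot/modules/fwkern.conf
```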