RPawar
Contributor

Can we mix and match between 5000 and 9000 appliances?

Hello guys,

I have a query here and would be grateful if you could assist me.

We are planning to replace the current 5000-series cluster setup with a new 9000-series cluster setup.

The client wants it done in phases to minimise the overall downtime, hence my question: is it supported in non-Maestro setups to mix and match hardware?

For example, can we replace the secondary 5000 gateway with a new 9000 gateway, reset SIC, deploy policy on the cluster, fail over to the new 9000 gateway, and then replace the primary 5000 gateway in the same way?


Accepted Solutions
simonemantovani
MVP Silver

Hello,

No, it is not possible to have a cluster with different hardware (only Maestro allows this).

It is recommended that a cluster be made of identical hardware.

The client will have to accept a minimum of downtime during the replacement of the firewalls.


9 Replies

the_rock
MVP Diamond

I know a customer that did that, though I told them it's 100% NOT supported. So yes, it would work, but just remember, it's at your own risk. If any issues arise, opening a TAC case would not do much, as they would certainly tell you the exact same thing.

Best,
Andy
"Have a great day and if its not, change it"
CaseyB
Advisor

I have also used this method with success.

Bob_Zimmerman
MVP Gold

Note that it's "not supported" as in tech support won't help you if you have trouble with it.

How well this works depends on the exact hardware. As long as they have the same number of cores, sync will work and you should get a stateful failover. Fewer cores can sometimes sync to more cores, but I haven't tested this enough to know the circumstances and limits. I would only try this with exactly the same version on all four members.

If you can't have an outage, you need to run it in a lab to build a process. I would then run the process to find any snags until you have at least three consecutive successes. For complicated things, I generally end up running it 10-15 times to build up a sort of muscle memory.
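When rehearsing this in a lab, a quick parity check on both members helps confirm whether stateful sync has a chance of working. A minimal sketch using standard Gaia/Check Point CLI commands (verify the exact output format on your version):

```shell
# Run on BOTH cluster members and compare the output side by side.
fw ver                 # software version must match exactly
cpinfo -y all          # installed Jumbo Hotfix takes must match
fw ctl multik stat     # number of CoreXL Firewall instances must match
fwaccel stat           # SecureXL status must be the same on both members
cphaprob stat          # cluster state; expect one Active, one Standby
cphaprob syncstat      # sync statistics; watch for growing error counters
```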

the_rock
MVP Diamond

@RPawar 

Let me clarify what I meant in my first response. It would work, BUT not via the cluster mechanism; only one firewall at a time. So if that's something the client is okay with, then yes.

Best,
Andy
"Have a great day and if its not, change it"
simonemantovani
MVP Silver

I agree with @the_rock. In my opinion, the best thing is always to follow the recommendations and admin guides provided by the vendor, even if this means downtime.

In my experience, when you change hardware as in your case, the approach suggested by @the_rock is the best and most correct one.

the_rock
MVP Diamond

It's always best to be 100% honest with people. If something is unsupported, they need to know it, so they are really doing the process at their own risk, and they should understand that if it does not work, help would be very limited.

Best,
Andy
"Have a great day and if its not, change it"
Lesley
MVP Gold

Not supported. In later versions you can achieve this with ElasticXL; for now only Maestro can do this.

ElasticXL supports different models of Check Point appliances starting in recent versions (Feature ID PMTR-118534).

I would just install the new cluster, add the gateways in SmartConsole, clone the policy, and install it on the new gateways. Then do the cutover at the switch level.

ClusterXL, which is what you use now, explains it this way:

ClusterXL operation completely relies on internal timers and calculation of internal timeouts, which are based on hardware clock ticks.

Therefore, in order to avoid unexpected behavior, ClusterXL is supported only between machines with identical CPU characteristics.

Not only the hardware; the software is also a challenge!

ClusterXL is supported only between identical operating systems (all Cluster Members must be installed on the same operating system).

ClusterXL is supported only between identical Check Point software versions - all Cluster Members must be installed with identical Check Point software, including OS build and hotfixes.

All Check Point software components must be the same on all Cluster Members. Meaning that the same Software Blades and features must be enabled on all Cluster Members:

  • SecureXL status on all Cluster Members must be the same (either enabled, or disabled)

  • Number of CoreXL Firewall instances on all Cluster Members must be the same
-------
Please press "Accept as Solution" if my post solved it 🙂
Timothy_Hall
MVP Gold

Officially, it is not supported to use different gateway models in a ClusterXL cluster unless you are doing "mix and match" in a Maestro Security Group.

Unofficially, it is possible to create a valid cluster with different hardware models if you turn off Dynamic Split and configure manual static splits with the same number of Firewall Worker instances.  The number of Dispatchers/SNDs on each gateway does not need to match, as the SecureXL state is not synced between members.  So, for example, if you have a 16-core gateway and a 24-core gateway, with split settings of 4/12 and 12/12, respectively, the cluster will sync up and work fine.  However, I would not recommend leaving it in this state for a prolonged period outside of a lab environment, as you may encounter unexpected behavior.
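The 4/12 and 12/12 example above could be set up roughly like this; the Dynamic Balancing command syntax is assumed per sk164155 and should be verified against your Jumbo level:

```shell
# On EACH member: pin a static CoreXL split so Firewall Worker counts match.
dynamic_balancing -p             # print current Dynamic Balancing status
dynamic_balancing -o disable     # turn Dynamic Balancing off (per sk164155)

cpconfig                         # menu: "Configure Check Point CoreXL";
                                 # set 12 Firewall instances on BOTH members
reboot                           # the new split takes effect after reboot

fw ctl multik stat               # confirm both members now report 12 instances
```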

Once this is done, Multi-Version Cluster (MVC) can be enabled to force gateways running different code levels to successfully sync their state with each other, and ensure no connections are lost during the initial failover from the older code to the newer code.  Prior to the availability of MVC, one trick I would use to blunt the effect of a non-synced failover from the old code onto the new code was to disable "drop out of state TCP packets" in the Global Properties under Stateful Inspection...Out of State packets.  This would be done prior to starting the upgrade and the policy reinstalled to the cluster.  Once the non-stateful failover occurs onto the new code, you'll need to "exercise" any existing rarely-used data connections as part of your test plan, and these connections  will be resurrected back into the state table of the new gateway, since out of state TCP drops are off.  Just don't forget to turn this back on once the upgrade is complete!
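A rough sketch of enabling MVC on a member (clish syntax as assumed from the R81.x upgrade documentation; the out-of-state packet setting mentioned above is changed in SmartConsole Global Properties, not on the CLI):

```shell
clish -c "set cluster member mvc on"   # allow sync with a different version
cphaprob mvc                           # verify MVC status on this member
cphaprob stat                          # members should pair up Active/Standby
```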

However, there is a new kernel variable added in the latest Jumbo HFAs for R81.20 and R82+ called fwha_allow_different_corexl_instances which, if set to 1 (default is 0), will apparently allow a cluster to sync up and work with a different number of Firewall Worker instances on the members.  I assume this is for Maestro "mix and match"; I have no idea whether it will work with a ClusterXL HA cluster, as documentation on this new variable is pretty sparse.
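For lab experiments with that variable, the usual kernel-parameter mechanics apply (the variable itself exists only in recent Jumbos, so confirm availability on your take first):

```shell
# Set on the fly (lost on reboot):
fw ctl set int fwha_allow_different_corexl_instances 1
fw ctl get int fwha_allow_different_corexl_instances   # confirm the value

# Persist across reboots:
echo 'fwha_allow_different_corexl_instances=1' >> $FWDIR/boot/modules/fwkern.conf
```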

New Book: "Max Power 2026" Coming Soon
Check Point Firewall Performance Optimization
