FtW64
Contributor

Zero-downtime migration/upgrade to new hardware (ClusterXL Active/Standby)

I'm looking for a procedure (a high-level action plan) to migrate a 'simple' ClusterXL HA cluster (one Smart-1 and two members in Active/Standby) to new hardware (and preferably also upgrade from R81.10 to R81.20 at the same time), using a zero-downtime (hence: stateful) migration/upgrade.

The old hardware consists of two SG5600 appliances. The new hardware consists of two SG6400 appliances. Besides the management interface, both environments will be connected using two 10 Gbit/s SR fiber connections (per appliance) using the built-in 4x10G adapter card.

Of course, I can simply build the new cluster on the new hardware, then disconnect the network interfaces from the old cluster, connect the new cluster to the network, force some gratuitous ARPs (gARPs), and test. But that would be a non-stateful 'failover' to the new cluster.

Is it possible to perform such a zero-downtime upgrade/migration to new hardware with Check Point? I think the major challenge is that at some point in the process you would have a cluster consisting of one SG5600 and one SG6400. That's probably not supported (and might not even work).
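For reference, the non-stateful cutover described above can be sketched as a short CLI sequence run during the maintenance window. This is only a sketch: the interface name (eth1) and cluster VIP (192.0.2.1) are placeholder values, not taken from this environment.

```shell
#!/bin/sh
# Sketch of the non-stateful cutover, assuming the new R81.20 cluster is
# already built, licensed, and has policy installed.
# eth1 and 192.0.2.1 are example values only.

# On each new member: confirm ClusterXL is healthy before re-cabling.
cphaprob state          # expect one Active and one Standby member
cphaprob -a if          # verify all monitored interfaces are up

# After moving the cables to the new cluster, speed up L2 convergence by
# sending gratuitous ARPs for the virtual IP from the active member
# (-U = unsolicited/gratuitous ARP, -c 3 = three packets).
arping -U -c 3 -I eth1 192.0.2.1

# Finally, verify traffic is flowing through the new active member.
fw ctl pstat
```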

4 Replies
_Val_
Admin

Different appliances in the same cluster are 100% not supported, and 99.9% won't work. Your best bet is to configure the new cluster, install policy on it, and re-cable. That means non-zero downtime, though.

FtW64
Contributor

Hi Val,

I was afraid of that. We'll go for the non-stateful migration plan instead.

Thanks for the quick response!

Daniel_Kavan
MVP Gold

It's tempting to replace the hardware of one member first (the standby 6900), bring in your new HA appliance (9300), and then do the active member.

So, building a temporary cluster, OK, got it. But when you go live, you'd have to either:

1. make the new temp cluster permanent,

or

2. a. remove each 6900 cluster member from the existing cluster (use cpconfig to mark that appliance as NOT in a cluster), b. remove each 9300 cluster member from the temp cluster via cpconfig, and c. add both 9300s into the existing cluster.

From experience: I made the temp cluster my new cluster, and that resulted in a lot of cleanup/mess/extra work.
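If you do try the temporary-cluster route, it's worth confirming that state sync is actually working before the final failover. A minimal check, run on each member (standard Check Point CLI, no environment-specific values):

```shell
# On each cluster member: verify cluster and sync health before failing over.
cphaprob state            # both members should see each other as cluster peers
cphaprob syncstat         # sync statistics; errors here mean no stateful failover
fw tab -t connections -s  # connection-table counts should be similar on both members
```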

CheckPointerXL
Advisor

I always perform migrations by building a temporary cluster with mixed hardware, and 90% of the time I get a stateful failover.

The only requirement is that the new firewalls have an equal or higher number of CoreXL instances.

Sometimes strange things can happen (misaligned Dynamic Balancing, CoreXL mismatch, etc.). But despite the Active/Down state, sessions are synced and a transparent failover will happen.
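The CoreXL instance counts mentioned above can be compared on the old and new boxes before attempting this (standard command, no environment-specific values):

```shell
# Run on both the old (SG5600) and new (SG6400) appliances: list the CoreXL
# firewall instances and their CPU mapping. The new box must have at least
# as many instances as the old one for stateful sync to have a chance.
fw ctl multik stat
```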

