Daniel_Kavan
Advisor

ClusterXL - moving to newer appliances

Hi mates,

I'm moving from 6900 to 9300 appliances.

Option A

I plan to add one of the 9300 appliances in as a 3rd member, then remove one of the 6900s as a member, and later do the same for the other 6900. None of these licenses specify active vs. standby node (HA). Will I be able to push policy to the three members OK? This is ClusterXL, not the newer ElasticXL.
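For reference, a quick way to sanity-check the mixed cluster from expert mode on each member after a policy push (standard ClusterXL commands; exact output varies by version):

    # Show this member's state and the peer members it can see
    cphaprob state

    # Show monitored interfaces and critical devices
    cphaprob -a if
    cphaprob -l list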


Option B

Push policy to the new cluster, then shut down the old cluster.
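A minimal pre-cutover check on the new 9300s before the old cluster goes down, assuming expert-mode access:

    # Confirm the expected policy is installed on each new member
    fw stat

    # Confirm member states (one Active, one Standby)
    cphaprob stat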



9 Replies
Bob_Zimmerman
Authority

I would just shut down one of the 6900s, build the new 9300, establish SIC, push policy (should work, as long as they're running the same version), then fail over (this will involve a hard outage, since the 6900 and 9300 almost certainly won't sync) and repeat the process.
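A sketch of the per-member swap in CLI terms, assuming expert mode and the standard clusterXL_admin script (SIC itself is established via cpconfig and SmartConsole, omitted here):

    # On the 6900 being retired: take it out of the cluster gracefully
    clusterXL_admin down

    # On the remaining member: confirm it has gone Active
    cphaprob state

    # After the 9300 is built, SIC is established, and policy is pushed:
    # verify the new member's view of the cluster
    cphaprob state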

You could add it as a third member, but that seems like a lot of headache to me for very little benefit. That method would also involve a hard outage when you fail to a 9300 for the first time.

JozkoMrkvicka
Authority

A 6900 and a 9300 cannot sync with each other? Even if both are running the same version and Jumbo? At least on R81.20, minimum Take 14, where MVC is enabled by default?

I'm not sure how sync works between cluster members running different Firewall Modes (USFW vs. KSFW) and different SecureXL Modes (UPPAK vs. KPPAK).

A different number of cores shouldn't be an issue with MVC enabled.
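For what it's worth, the MVC state can be checked per member; a sketch assuming the commands from the R80.40+ MVC mechanism still apply on R81.20:

    # Show whether the Multi-Version Cluster mechanism is enabled
    cphaprob mvc

    # Toggle it manually if the Jumbo default doesn't match what you want
    cphaconf mvc on
    cphaconf mvc off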

Kind regards,
Jozko Mrkvicka
Bob_Zimmerman
Authority

I only tested cross-CoreXL-topology sync with MVC back in the R80.20 days, and it did not allow machines with different CoreXL topologies to sync at the time. If that has changed, it wouldn't exactly surprise me, but this is the first I've heard about it.
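For anyone repeating that test, each member's CoreXL topology is visible with the standard command below; mismatched instance counts are the classic reason sync is refused:

    # Show CoreXL firewall instances and their per-core distribution
    fw ctl multik stat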

the_rock
Legend

I would 100% go with option B; it sounds way safer to me. Yes, option A would PROBABLY work, but it's a bit risky and most likely not supported.

Andy

Daniel_Kavan
Advisor

I've done option B and I'll go with that again, but I think option A would work fine too. Thanks all.

the_rock
Legend

I just checked, and the documentation says that gateways of different hardware models are NOT supported, i.e., you can't mix and match, say, a 6900 and a 9300 in a ClusterXL config. Would it work? Maybe, but why even bother if it's not officially supported? :-)

Andy

Daniel_Kavan
Advisor

So, my current cluster object ROCK_on with the 6900s is used in 103 objects and 143 policies. If I rename my old cluster object ROCK_on to ROCK_off, then those objects and policies will all point to ROCK_off, which doesn't help. Then, if I rename my new blackROCK cluster object to ROCK_on, it won't accomplish anything. Will I need to manually change all those objects and policies to blackROCK?
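One way to enumerate the references before deciding is the management API; a sketch run on the management server, assuming R80+ mgmt_cli and that ROCK_on is the object's name:

    # List everything that references the old cluster object
    mgmt_cli -r true where-used name "ROCK_on" --format json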

Bob_Zimmerman
Authority

Not necessarily. The Where Used dialog has a "Replace" button in the upper right. Hit that and you can replace the old object with the new object in rules and group memberships. You'll still have some manual cleanup to do, though. This can't replace the cluster in a policy package's Installation Targets, in threat prevention update schedules, and a bunch of other places.

This is part of why I really dislike replacing a cluster with a whole new cluster. I greatly prefer having the new members take over the old objects. There's a lot less to go wrong.
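For simple cases the swap can also be scripted; a sketch using mgmt_cli's list add/remove syntax, with "some_group" as a hypothetical group that contains the cluster:

    # Replace the old cluster object inside one group, then publish
    mgmt_cli -r true set group name "some_group" members.remove "ROCK_on" members.add "blackROCK"
    mgmt_cli -r true publish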

the_rock
Legend

Good point about replacing.
