Matt_Taber
Contributor

Merging 2 FW clusters into 1 - Advice Requested

Good afternoon,

We're in the early stages of decommissioning/merging 2 (small) datacenters into 1. We'd like to decommission 1 of the CP HA clusters and move all networks, rules, NATs, VLANs, etc. onto the remaining CP HA cluster.

Has anyone done anything like this or have any advice they'd like to offer?

Items we have already considered:

- Turn down VLANs/networks on the to-be-decommissioned cluster and bring them up on the other (in Gaia).

- Document the topology of the to-be-decommissioned cluster so it can be applied to the remaining cluster in the next step.

- Pull/update the topology on the remaining cluster (SmartDashboard).

- Ensure the VLANs are built into the switching infrastructure to handle both datacenters' VLANs.

- Modify NATs to install on the remaining cluster rather than the cluster that is going away.

- Move static routes to the remaining cluster.
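Several of the items above map directly onto Gaia clish one-liners that can be staged in a text file ahead of the change window. A sketch for the static-route item, with made-up networks and next hop:

```
set static-route 10.20.0.0/16 nexthop gateway address 10.1.1.1 on
set static-route 10.30.0.0/16 nexthop gateway address 10.1.1.1 on
save config
```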

Is there a way to script the turn-up of new interfaces/VLANs on the remaining cluster to minimize downtime of the datacenter equipment that is having its cluster decommissioned?

8 Replies
Vladimir
Champion

Sure thing: simply write the Gaia configuration changes in a text file and paste them into your target cluster's members.
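To sketch what that text file might contain, and one way to script its generation (per the question above): a small shell script that emits the clish lines for each VLAN being moved. The interface name, VLAN IDs, and addresses below are made-up placeholders; the clish syntax is the standard Gaia `add interface <if> vlan <id>` form.

```shell
#!/bin/sh
# Sketch only: emit the Gaia clish commands needed to turn up one VLAN
# interface. Arguments: parent interface, VLAN ID, IPv4 address, mask length.
emit_vlan_clish() {
  echo "add interface $1 vlan $2"
  echo "set interface $1.$2 ipv4-address $3 mask-length $4"
  echo "set interface $1.$2 state on"
}

# Example run: two placeholder VLANs destined for the remaining cluster.
{
  emit_vlan_clish eth1 100 10.1.100.2 24
  emit_vlan_clish eth1 200 10.1.200.2 24
  echo "save config"
} > vlan-turnup.clish

cat vlan-turnup.clish
```

Paste the resulting file into a clish session on each member (member-specific addresses differ, of course), then proceed with "Get Interfaces" as described below.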

Once the commands are executed and saved with "save config", you'll have to perform "Get Interfaces" in the cluster's properties and manually configure the vIPs and topology of the newly imported interfaces.

The objects and rules could be pre-configured and rules disabled in advance.

Once cluster configuration is changed, enable the rules and push the policy.

You may consider cloning the policy package before introducing the changes, for fallback capability.

Danny
Champion

May I ask if both clusters are centrally managed by a dedicated SmartCenter Server (Distributed Deployment), or do both clusters use Full HA and maintain their own SmartCenter Servers?

Matt_Taber
Contributor

Both are centrally managed by SmartCenter Servers in MGMT HA.

Vladimir
Champion

If you are using a single policy package with different installation targets, it is even easier.

If you have different policy packages for each cluster, you'll have to copy rules from one to the other and replace the old cluster with the new one wherever it is present.

Matt_Taber
Contributor

1 policy package for all installation targets (don't ask).

Vladimir
Champion

Well, in this case it is actually not a bad thing, since the rules are simply going to be applied to a single target cluster (you'll have to change the policy package's installation targets to specify it).

So simply copy the configs from the old cluster members, reproduce the pertinent network configuration steps outlined in my previous post, power down the old cluster, wait for the ARP caches on the switches to time out, and you should be good to go.
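Copying the configs off the old members can be done entirely from clish; a minimal sketch, assuming your Gaia version supports filtering `show configuration` by feature:

```
show configuration interface
show configuration static-route
```

Save the output to a file, prune anything you don't want to carry over, and that becomes the paste file for the remaining cluster's members.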

Do clone the policy package before embarking on this, and back up the target cluster's Gaia configuration as well, so you have recovery options.

If you want to play it extra safe, snapshot every component, export and download the resultant files offline.

*Do not use any special characters or spaces in the names or DESCRIPTIONS of the snapshots.
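In clish that looks something like this, keeping the name and description to plain alphanumerics and underscores per the warning above (names and path are placeholders):

```
add snapshot pre_merge desc pre_merge_backup
show snapshots
set snapshot export pre_merge path /var/log name pre_merge_export
```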

_Val_
Admin

I would suggest VSX as an option.

Matt_Taber
Contributor

Good afternoon,

We pulled the trigger on collapsing 2 clusters down to one yesterday afternoon. Everything went fairly smoothly; thank you everyone for the tips and advice. The plan worked out as we expected, minus a few NATs that need to be modified due to some policy routing we had/have in place.

1 item we ran into that we weren't expecting: after we pasted the interface and VLAN configurations via clish, ran "Get Interfaces" within Dashboard, and installed policy, no virtual IP addresses were showing up in the output of cphaprob -a if. We attempted to work with CP support, to no avail.

We manually added a new VLAN and interface to the cluster, did another "Get Interfaces", and there were 50+ changes showing up that needed to be published (this was after there were 200+ following "Get Interfaces" on the initial round of clish commands). Something about manually adding an interface in the Gaia web GUI (versus pasting clish configurations) knocked something loose (har har).

Thanks again to all who provided feedback on this process.  We didn't have any connectivity loss on the cluster that remained, and an acceptable amount of downtime on the networks that were being moved.
