
Migrating cluster from old to new hardware


Hi,

We are finally replacing the old UTM appliances in our FW cluster with new 5600 appliances. I would like to keep the same names in the policy, but since the interface names change, I would like to know the best way to migrate to the new appliances with minimal outage.

My plan was:

- Fail over to the HA (standby) member.

- Move the cables from the primary appliance to the new 5600 primary appliance.

- Take a migrate export of the policy. Then remove all references to the existing cluster from the policy and delete the whole cluster from the management server.

- Create a new cluster with initially one member (the new 5600 primary), establish SIC, and configure the cluster with all the new interfaces. Add the cluster to the rules where the old cluster was removed.

- Remove the cables from the old HA firewall while installing the policy on the new primary.

- Connect the new 5600 HA member and add it to the cluster (and install policy).
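Before moving any cables in a plan like this, it helps to confirm the cluster state from the gateway CLI. A minimal sketch with standard Check Point commands (exact output varies by version; run these on the gateways themselves):

```shell
# Show ClusterXL state: which member is Active, which is Standby
cphaprob stat

# Controlled failover: administratively take the current active
# member down in ClusterXL, then bring it back up when finished
clusterXL_admin down
clusterXL_admin up
```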

Any other (or better) recommendations for a smooth migration to the new hardware?

Or can I just delete 1 cluster member and add the new hardware with different interface names to the cluster object?

Many thanks.


Accepted Solutions

Re: Migrating cluster from old to new hardware


It's then really straightforward:

1. Start with the standby node and migrate its configuration to the new node:

      I would do this by taking the actual clish config and editing it to change the interface names (best with search/replace).

      Don't forget to apply the other config changes not represented in clish to the new node (scripts, fwkern.conf, etc.).

2. Disconnect the old standby from the network.

3. Connect the new standby to the network.

4. Reset SIC through SmartConsole.

5. Change the version/appliance type in the general properties of the cluster object.

6. Change the advanced configuration of the interfaces in the Topology/Network Management part to allow different interface names on both nodes. It doesn't matter whether the cluster interface name is the new one or the old one during the migration, but I prefer to use the new one already.

7. Install policy.

8. Fail over to the new node with the new version using cphastop on the active node.

9. Repeat steps 1-7 for the 2nd old node.
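The search/replace in step 1 can be sketched as a quick shell pass over the exported clish config. A minimal sketch, assuming old interface names like `eth1-01` map to new names like `eth1` on the 5600; the names and file path are just examples, adjust them to your appliances:

```shell
# On the old standby node, export the running Gaia config first:
#   clish -c "show configuration" > /var/tmp/old-node.cfg
# Stand-in for that exported file, so the rename step is concrete:
cat > /var/tmp/old-node.cfg <<'EOF'
set interface eth1-01 ipv4-address 10.1.1.2 mask-length 24
set interface eth1-01 state on
set interface eth2-01 ipv4-address 10.1.2.2 mask-length 24
EOF

# Rename each interface for the new appliance (old name -> new name):
sed -i -e 's/eth1-01/eth1/g' -e 's/eth2-01/eth2/g' /var/tmp/old-node.cfg

# Review the result, then load it on the new node with:
#   clish -f /var/tmp/old-node.cfg
cat /var/tmp/old-node.cfg
```

Review the edited file carefully before loading it; anything not in clish (scripts, fwkern.conf) still has to be carried over by hand.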


6 Replies

Re: Migrating cluster from old to new hardware


Is this a Full-HA Cluster (Management on same box as gateway)? Or is it a distributed environment (Management running on separate server)?

I am not sure, since you mention migrate export in your description.

If it is distributed, you can easily change the existing cluster object to have interface names adjusted to new names and also adjust appliance hardware type.


Re: Migrating cluster from old to new hardware


Hi Norbert,

It's a distributed environment, and only a hardware replacement of the cluster, so it should be straightforward. Management is running R80.10, the existing cluster is on R77.20, and the new hardware will be running R80.10.

It's only that we need to keep the outage window as small as possible.

My only concern is the change in interface names and how to make that as smooth as possible.

The migrate export is basically only a safety measure to have a good configuration of the policies to rollback to if the migration fails.

In short: is it easier to remove the whole cluster object from the policy and create a new one with the new hardware/interface names, or can I remove one firewall object from the cluster and add a new one with the new interfaces, even though both cluster members will then have different interface names?

Thanks again.



Re: Migrating cluster from old to new hardware


Thanks Norbert,

I'll give it a go.

I have done similar replacements in the past, from open server to appliance or Solaris to appliance. It was always straightforward, but somehow this one confused me (maybe because it looks different in R80.10).

Thanks a lot.

Jan

0 Kudos

Re: Migrating cluster from old to new hardware


Any other considerations if using VRRP?

Thank You

Admin

Re: Migrating cluster from old to new hardware


The steps should be fairly similar.
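One difference worth noting: with VRRP the failover state lives in VRRP rather than ClusterXL, so the state checks differ. A hedged sketch of the Gaia clish commands (exact output varies by version):

```shell
# Gaia clish: check VRRP status instead of cphaprob
clish -c "show vrrp"
clish -c "show vrrp interfaces"
```

Failover on a VRRP member is typically driven by priorities and monitored interfaces rather than cphastop; check the Gaia Administration Guide for the supported failover method on your version.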
