
Migration R77.30 to R80.10 on new hardware

Hello everyone,

I'm planning to migrate my R77.30 cluster (open server) to R80.10 and, at the same time, move to new hardware, and I'd like to ask your opinion.

Let me give you some details.

My current cluster has 13 physical 1 Gbps interfaces; the new one has only 2 fiber-optic ports and 4 Ethernet 1 Gbps ports.

In my current configuration I have many sub-interfaces corresponding to my VLANs.

For the new one I want to put all the VLANs on the fiber-optic ports (link aggregation, etc.), except the management interface and the cluster sync, to which I'll dedicate two of the Ethernet ports.
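For reference, the bond-plus-VLAN layout could look something like this in Gaia clish. This is only a sketch: the interface names eth5/eth6, VLAN 10, and the IP address are assumptions, and 802.3ad mode requires LACP on the switch side.

```shell
# Hypothetical sketch (Gaia clish) - eth5/eth6 assumed to be the fiber ports
add bonding group 1
add bonding group 1 interface eth5
add bonding group 1 interface eth6
set bonding group 1 mode 8023AD        # LACP; switch must be configured to match

# Recreate each VLAN as a sub-interface of the bond (VLAN 10 as an example)
add interface bond1 vlan 10
set interface bond1.10 ipv4-address 10.0.10.1 mask-length 24

save config
```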

So my question is: what is the best way to migrate the cluster with the minimum outage?

Here are the two options I've thought of:

Scenario 1 (short outage):

1) Install R80.10 on the new hardware and configure the ports with the new topology.

2) Shut down the "passive" member of the cluster and integrate the new one, which will be in the "Ready" state.

3) Go to the SMS (R80.10), establish a new SIC, then do "Get Interfaces" to pull the new member's topology.

4) Install policy.

5) Shut down the old "Active" member.

6) Integrate the new passive member (repeat steps 3 and 4).

7) Test failover.

8) Test flows to the VLANs, etc.
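After each member swap, the cluster state can be sanity-checked with the standard ClusterXL commands (a sketch; output details vary by version):

```shell
# Run on the new member after policy install
cphaprob state        # expect Active/Standby on the two members, not Ready/Down
cphaprob -a if        # verify sync and monitored (VLAN) interfaces are UP
fw ctl pstat          # sanity-check sync statistics
```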

Scenario 2 (longer outage):

1) Prepare the two new appliances with the new topology.

2) Stop both members.

3) Integrate the new members into the cluster.

4) Go to the SMS (R80.10), establish a new SIC, then do "Get Interfaces" to pull the new members' topology.

5) Test failover.

6) Test my flows to the VLANs, etc.
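Whichever scenario is chosen, taking backups before touching the cluster is worth adding as a step 0. A sketch using the standard tools (file names and paths are placeholders):

```shell
# On each old gateway: save the Gaia configuration as a clish script
clish -c "save configuration old-memberX-config.txt"

# On the R80.10 SMS: export the management database before the cutover
# (migrate is found under $FWDIR/bin/upgrade_tools)
./migrate export /var/log/pre-cutover-backup.tgz
```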

I think I've forgotten some steps and some precautions to take into consideration, so I'd like your help with that.

I'd be happy to receive any ideas or suggestions to improve these scenarios, or even to change them completely.

Thank you in advance,


PS: PBR is active.

PS1: there are 40 active IPsec VPN tunnels.
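Since PBR and the IPsec tunnels are in play, they can be checked from the CLI after each cutover step. A sketch with standard Gaia/Check Point commands (exact options may vary by version; note the PBR rules reference interface names, so they must be recreated for the renamed bond sub-interfaces):

```shell
# PBR lives in the Gaia config and is tied to interface names
clish -c "show pbr rules"
clish -c "show pbr tables"

# Interactive tunnel utility: list/delete IKE and IPsec SAs after failover
vpn tu
```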

1 Reply

Try first upgrading both members to R80.10. Wait around a month to see how it goes (no issues, outages, ...).

Once all green, go ahead and replace HW.

Scenario 1 is preferred, but there are a couple of things you need to know:

You also need to rename the VIP (cluster) interfaces. "Get Interfaces" only pulls data from the node, so you need to do it for the cluster object as well. I did a Scenario 1 HW upgrade in the past, and we renamed ALL interfaces in the Topology tab (including the cluster and both nodes). I'm not sure how Check Point handles the case where the cluster object has a different interface name assigned than the members.

You will not see the "Ready" state. Once you push the policy (with the option to install on all cluster members unchecked), both nodes will be Active, and you need to perform cpstop on the "old" member. This was the case during my HW replacement.
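A hedged sketch of that cutover (run cpstop on the old member only once the new member has the policy installed):

```shell
# On the old "Active" member: stop Check Point services so the new member takes over
cpstop

# On the new member: confirm it went Active
cphaprob state
```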

Kind regards,
Jozko Mrkvicka


