Kaspars Zibarts

R77.30 VSX appliance upgrade to R80.10

Discussion created by Kaspars Zibarts on Aug 30, 2017
Latest reply on Sep 6, 2018 by Kaspars Zibarts

Hey! 

 

We just started lab testing these and hit a roadblock right from the start.

 Firstly, CP documentation is very ambiguous - within the same R80.10 installation and upgrade document, one section says that:

  • CPUSE upgrade is OK (read Upgrading Security Management Servers and Security Gateways, Upgrading a VSX Gateway section). It says that after the vsx_util upgrade step you can skip vsx reconfigure (= fresh install) if you used CPUSE. This is backed up by the actual CPUSE verifier on the R77.30 gateway via CLI - it confirms that the upgrade is OK.
  • Only clean install is allowed (read Upgrading ClusterXL Deployments, Connectivity Upgrade, Upgrading VSX High Availability Cluster), which says "Upgrade the Standby cluster member with a clean install".

 

Now I have tried the upgrade twice using CPUSE and both attempts failed. The reason is that the interface naming script gets changed from the appliance-specific one (in our case a 5900 - /etc/udev/rules.d/00-PL-40-00.rules) to the generic open server one, /etc/udev/rules.d/00-OS-XX.rules! So the extension slot interfaces are not called eth1-0x anymore!

Very odd!
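A quick way to spot which naming scheme a member ended up with after the upgrade is to check which rules file is present in /etc/udev/rules.d. This is only a minimal sketch based on the two file names above; the 00-OS-*.rules wildcard for the generic open server file is my assumption, so adjust the names for your appliance model:

```shell
# Report which interface-naming rules file a gateway is using.
# File names come from this post (5900 appliance); 00-OS-*.rules is assumed.
check_naming_rules() {
    dir="$1"  # e.g. /etc/udev/rules.d
    if [ -f "$dir/00-PL-40-00.rules" ]; then
        echo "appliance-specific"          # eth1-0x style names expected
    elif ls "$dir"/00-OS-"":.rules >/dev/null 2>&1; then
        echo "generic-open-server"         # names will NOT match eth1-0x
    else
        echo "unknown"
    fi
}

# Example: check_naming_rules /etc/udev/rules.d
```

Running this before and after the CPUSE upgrade would show whether the appliance-specific rules file was replaced.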

 

I reverted the same appliance to its pre-VSX state on R77.30 and ran a clean install of R80.10 using CPUSE, and that all worked OK, with interface names assigned correctly by the appliance-specific script.

 

This basically means that CPUSE is useless for VSX upgrades and we have to go back to the old-school method of a full re-install from ISO? Am I correct? Has anyone done VSX appliance upgrades, and what was your approach?

 

I will be testing the same on open servers later today.
