Mattias_Jansson
Collaborator

Controlled failovers during a VSX vsls MVC upgrade

Hi!

I'm about to perform an R80.30 (2.6 kernel) to R81.10 VSX VSLS MVC upgrade (with clean install).
It is a three-node cluster with seven Virtual Systems.

During the maintenance window, all VSs can run on one member if necessary.

I would like to have control over the failovers when the upgraded VSX gateways become active.

Are there any best practices or recommended (and supported) ways to do that?

Is vsx_util vsls supported during an MVC upgrade?
Or should I run clusterXL_admin down -p to prevent the VSs from returning to the newly upgraded member?

Thanks in advance

/Mattias

_Val_
Admin

My understanding is that you can clean install the new VSX members, run vsx_util reconfigure to rebuild them, and then use MVC without any issue.

I am not sure I understand your question about vsx_util vsls. What is your intention with it during the upgrade?

 

And of course, the admin down command is the ultimate tool to avoid a failover, VSX or otherwise 🙂
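
A minimal sketch of that approach (Expert mode on the member you want to keep out of traffic; on VSX, run it in the relevant VS context via vsenv, and check the guide for whether VS 0 covers all VSs on your version):

    clusterXL_admin down -p    # administrative down, persistent across reboot
    cphaprob state             # verify the member shows DOWN in the current VS context

    # when you are ready to let it handle traffic again:
    clusterXL_admin up -p
    cphaprob state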

Mattias_Jansson
Collaborator

As I understand it:
Check Point recommends using vsx_util vsls to distribute all VSs to M1 before the upgrade,
then upgrading M3 and M2, and finally running cpstop on M1 to fail over to the upgraded gateways.
But as I understand it, when M1 is upgraded and I install the policy, all VSs will become active on M1 again, since it has the higher priority.
In that case I will have two failovers instead of one.
So my question is: what is the preferred method to avoid that?
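
For reference, the overall flow I have in mind looks roughly like this (M1-M3 as above; vsx_util is run on the management server and is interactive, so no extra flags are shown):

    # on the management server:
    vsx_util vsls              # redistribute all VSs to M1 before the upgrade

    # clean install / upgrade M3, then M2, and bring them back into the cluster

    # on M1, once M2 and M3 are upgraded and ready:
    cpstop                     # fails all VSs over to the upgraded members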

_Val_
Admin

I do not think your statement "When M1 is upgraded and I install the policy, all VS's will be active on M1 as it has higher priority" is correct. Please read the R81.10 Installation and Upgrade Guide for more details.

Mattias_Jansson
Collaborator

Hi again.
I performed the upgrade on Saturday and of course I followed the great guide. Below are my reflections.
Here is how we did it:
We ran vsx_util vsls and placed our three most critical VSs on M1, the next most critical VS on M2, and the other three VSs on M3.
After the clean install on M3 I made sure to install the latest JHF and apply all required settings before running vsx_util reconfigure and rebooting.
The next step was to enable the MVC mechanism. As soon as the member became active, all three VSs on M3 became active. (The failovers worked perfectly, with no downtime.)
It would be nice if the guide mentioned that the failover occurs after enabling MVC. (I thought it would happen after pushing the policy to M3.)
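
For anyone following the same path, the MVC step itself is just a one-liner; the Clish command below is how I recall it from the R81.10 documentation, so verify it for your version:

    # Gaia Clish on the upgraded member:
    set cluster member mvc on      # enable the Multi-Version Cluster mechanism

    # Expert mode, to see the result per VS (VS ID 2 is just an example):
    cphaprob state
    vsenv 2
    cphaprob state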

M2 behaved exactly like M3.

On M1 it was a different experience. After the clean install, JHF and settings, we ran vsx_util reconfigure and rebooted.
When it was up again, all the critical VSs returned to the newly upgraded M1.

As I needed to change the CoreXL and Multi-Queue setup, I had to reboot the member twice before it was correctly set up, which caused two extra failovers.
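
If it helps anyone, these are the kinds of checks that can be run between the reboots to confirm the CoreXL and Multi-Queue state (a sketch; flags can differ between versions, so verify on your build):

    # Expert mode on the member:
    fw ctl multik stat        # CoreXL instance status
    fw ctl affinity -l -a     # CoreXL / interface affinity
    mq_mng --show             # current Multi-Queue configuration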

During the upgrade, though, we didn't have any downtime, which is really impressive. Great work, Check Point!

I think the way I should have handled keeping M1 from becoming active after vsx_util reconfigure would have been to disable some of its interfaces. In that case the VSs should have stayed on M2.
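
To see where each VS is actually running before and after a change like that, a quick check from a member looks like this (VS ID 1 is just an example):

    # Expert mode:
    vsenv 1           # enter the context of a specific VS
    cphaprob state    # shows which member is Active / Standby for this VS
    vsenv 0           # back to the VS 0 context

The current distribution can also be reviewed from the management server with vsx_util vsls.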


