Hi,
running R80.30 Take 226 (upgrading to R81.10 in a few weeks)...
After performing a vsx_util vsls failover on a VSX cluster this evening, I noticed the following...
All VSs are active on node 2, but the top of the cphaprob output still shows node 1 as active. I did not find anything else wrong, and all VSs were running happily, so I'm wondering how to interpret these two lines (output pasted below)...
After some digging, I found this only happens in the VS0 context (vsenv 0). All other VSs show a cphaprob stat where node 2 is active.
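For reference, this is roughly the loop I used to check each context (a minimal sketch, assuming vsenv works inside a for loop in your interactive expert shell; the VS IDs are the ones from my setup):

# Check the ClusterXL state of every VS context, one by one
for vsid in 0 1 2 3 7 8; do
    vsenv $vsid > /dev/null
    echo "--- VS $vsid ---"
    cphaprob stat | grep -E 'ACTIVE|STANDBY'
done
vsenv 0 > /dev/null   # switch back to VS0 when done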
So this looks expected, although confusing... Is there a way to migrate this to node 2 as well, or is that not even necessary? What's the logic behind it? Funny enough, when you perform a clusterXL_admin down on node 1, the active line moves to node 2. But the reverse also happens: when you then bring node 1 up again, it becomes the active node once more (see the sequence sketched after the outputs below). So there are some processes which preferably run on the first node, right?
[Expert@gw-VSX-2:0]# cphaprob stat
Cluster Mode: Virtual System Load Sharing (Primary Up)
ID          Unique Address   Assigned Load   State      Name
1           192.168.242.5    100%            ACTIVE     gw-VSX-1
2 (local)   192.168.242.6    0%              STANDBY    gw-VSX-2
and:
Cluster name: gw-VSX
Virtual Devices Status on each Cluster Member
=============================================
ID             | Weight | gw-VSX-1  | gw-VSX-2 [local]
---------------+--------+-----------+-----------------
1              | 10     | STANDBY   | ACTIVE
2              | 10     | STANDBY   | ACTIVE
3              | 10     | STANDBY   | ACTIVE
7              | 10     | STANDBY   | ACTIVE
8              | 10     | STANDBY   | ACTIVE
------------------------+-----------+-----------------
Active                  | 0         | 5
Weight                  | 0         | 50
Weight (%)              | 0         | 100
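For completeness, the clusterXL_admin test I mentioned above looked like this (a hedged sketch: run in the VS0 context on node 1; my guess is the failback is down to the "Primary Up" mode shown in the first output, where the higher-priority member reclaims the active role once it's healthy again):

[Expert@gw-VSX-1:0]# clusterXL_admin down   # VS0 ACTIVE moves to node 2
[Expert@gw-VSX-1:0]# cphaprob stat          # node 2 now shows ACTIVE for VS0
[Expert@gw-VSX-1:0]# clusterXL_admin up     # node 1 grabs VS0 back again
[Expert@gw-VSX-1:0]# cphaprob stat          # node 1 shows ACTIVE once more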