artem_kruhlyi
Contributor

VSX Cluster consistency after upgrade from R77.30 to R80.30

Hi all,

Here is a brief explanation of the setup:

We have a two-node VSX Cluster in VSLS mode, built on Open Servers (HP ProLiant DL380 Gen9), with several Virtual Systems (VSs) and Virtual Switches (VSWs) deployed on it. The cluster was initially in broadcast mode, and we saw no errors or warnings before the upgrade. Last Saturday the cluster was upgraded from R77.30 to R80.30 using the Zero Downtime Upgrade procedure for a VSX Cluster. Everything went as expected, except that the main node started to show a warning (CPFW01_cphaprob.txt - in the attachments).

 

Here is a summary of our troubleshooting steps:

  1. After we got the warning, we ran "cphaprob stat" on the second node (CPFW02_cphaprob.txt - in the attachments).
  2. We ran "cphaprob list" on the first node to identify the problem (CPFW01_cphaprob_list.txt - in the attachments). The PNOTE itself indicates an issue with interface monitoring:

                        Device Name: Interface Active Check

  3. We ran "cphaprob -a if" on both nodes (CPFW01_cphaprob_if.txt and CPFW02_cphaprob_if.txt - in the attachments) and found that the "Required interfaces" value differs between the nodes: it is 3 on the first node and 2 on the second.
  4. We ran "cpstat ha -f all" on both nodes (CPFW01_cpstat.txt and CPFW02_cpstat.txt - in the attachments). Both nodes have identical interface tables.
  5. We checked the physical interfaces' state and configuration on both nodes. They are identical.
  6. We checked the bond interfaces' state and configuration on both nodes. They are identical.
  7. We checked the VLAN configuration on both nodes. It is identical.
  8. We checked connectivity between the nodes; everything is OK.
  9. We switched the cluster from broadcast mode to auto; it is now in unicast mode, with the same warning.
  10. We googled the warning and tried other troubleshooting steps mentioned in various SKs.
  11. We also checked the VSX Cluster object's configuration under "Physical Interfaces" and found one suspicious interface included: bond3.xx70. It is not supposed to be used as a trunk. However, we haven't changed anything yet, because bond3.xx70 is the DMI and we don't want to lose our MDM behind it if something goes wrong.
  12. A TAC case was created.
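For anyone repeating the "cphaprob -a if" comparison, the check that revealed the mismatch can be sketched as a plain shell script. This is not Check Point CLI, just a hedged illustration: the file names and their sample contents below are placeholders standing in for the real per-node captures of the "Required interfaces" line.

```shell
#!/bin/sh
# Illustrative placeholders: in practice these files would hold the real
# `cphaprob -a if` output saved from each cluster member.
printf 'Required interfaces: 3\n' > CPFW01_cphaprob_if.txt
printf 'Required interfaces: 2\n' > CPFW02_cphaprob_if.txt

# Extract the "Required interfaces" value from each node's capture.
req1=$(awk -F': ' '/Required interfaces/ {print $2}' CPFW01_cphaprob_if.txt)
req2=$(awk -F': ' '/Required interfaces/ {print $2}' CPFW02_cphaprob_if.txt)

# Flag the inconsistency that triggers the Interface Active Check PNOTE.
if [ "$req1" != "$req2" ]; then
    echo "MISMATCH: node1=$req1 node2=$req2"
fi
```

With our captures this prints "MISMATCH: node1=3 node2=2", which matches what we saw in step 3 above.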

The warning is still there.

 

Does anybody have an idea of how this can be resolved?
