Kaloyan_Metodie
Participant

BGP issues after upgrade 80.10 -> 80.30

Hi all 
We are stuck on a strange issue when upgrading a cluster from 80.10 to 80.30. 
Short description: 

We have two 80.10 GW appliances, facing two internet connections with BGP, advertising one /24 with equal metric via both providers. 

Both BGP sessions are Established on the primary cluster member (confirmed in HA and LS mode). 

After upgrading to 80.30, one of the BGP sessions comes up without issues; the other stays in the Active state. 

routed.log says: "interface eth1 has NO IPv4 CLUSTER address"

The error is logged even though the cluster addresses are properly configured, and the BGP session won't move to the Established state. 
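For reference, we were watching the daemon log from expert mode (standard Gaia path, as on our boxes):

    tail -f /var/log/routed.log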

Shutting down the working BGP session (disabling its interface) and waiting for the other to come up did not help. 

We tested this on 4600 appliances, then did the config from scratch on a brand-new 6400 - same issue. 
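For reference, this is roughly how we checked the peer state from clish on each member (a sketch, output trimmed):

    show bgp peers    # both provider sessions should show Established
    show route bgp    # BGP routes currently in the routing table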

I would appreciate any suggestions 🙂 

PhoneBoy
Admin

Kaloyan_Metodie
Participant

Yep, and the result of cphaprob -a if is looking good. 

PhoneBoy
Admin

TAC case is probably in order then:

Kaloyan_Metodie
Participant

We scheduled a meeting with TAC for tonight, as this is impacting prod firewalls and downtime is a bit tricky. 
I was hoping that someone had run into the same issue and could help reduce the time to resolve it. 
Will share results after some debug digging. 
Anyway - thank you for the reply 🙂 

Sundeep_Mudgal
Employee

BGP is not supported on non-cluster interfaces in a clustered environment. Thanks for checking routed.log. If this is a clustered environment and eth1 is not configured with a cluster VIP, then please configure it. If eth1 is configured with a cluster VIP, then please check the output of:

cphaprob -a if ---> this should show whether the VIP is configured and installed. 

show routed cluster-state detailed ---> this should show whether the routing daemon has the VIP. 
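From expert mode, a quick way to compare those two views for eth1 (a rough sketch based on this thread, not an official procedure):

    cphaprob -a if | grep eth1                                  # cluster view: eth1 and its VIP should be listed
    clish -c "show routed cluster-state detailed" | grep eth1   # routed's view: no output here matches the symptom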

Dilian_Chernev
Collaborator

Hi, I was also involved in debugging this issue, and we ran both commands.

cphaprob -a if - shows that eth1 exists and the VIP is configured and installed; the VIP was also accessible from the outside world.

show routed cluster-state detailed - eth1 is missing from here. Only 3 of the 4 VIP interfaces were shown.

We had a remote session with TAC and the issue was resolved, but it was not very clear what the problem was or how it was resolved.
The last thing we did before it was resolved was aligning the hostname of the machine with the object name in the policy.
After rebooting the device, the BGP sessions to both providers were established and working.
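For anyone hitting the same thing, the alignment step itself is plain clish (the name "GW-1" is illustrative - use your gateway object's name from SmartConsole):

    set hostname GW-1
    save config

Then reboot and re-check both BGP sessions.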

I am still curious what the reason could be for the VIP address missing from the routed configuration, and how to fix it.

Thanks 

JanVC
Collaborator

The new sk171555 looks a lot like your issue.

Sundeep_Mudgal
Employee

Most likely the cluster did not update the routing daemon with the VIP. This usually happens when policy is not pushed; I assume you pushed the policy. sk171555 explains how to resolve the issue.

Since there were 3 VIPs out of 4 in the routing daemon, could it be that eth1 was configured later?

Kaloyan_Metodie
Participant

Hi, as Dilian noted - the only change we made in order to get it up and running was aligning the hostname with its object name. 

Still not sure why it only affected one of the two BGP sessions, but now it works like a charm. 

Dilian_Chernev
Collaborator

It seems sk171555 is based on our issue 😀
Unfortunately, I am not 100% sure that this procedure solved the issue, as we had already done it before opening the ticket.
It also didn't work when the support engineer told us to do it again, but in the end we have a working cluster with BGP.

eth1 was already configured at the time of the upgrade, and we also built a new cluster object with new devices (but the same cluster IPs) - the issue was the same.

JanVC
Collaborator

I see the SK was updated yesterday; the first iteration had your full public IP address visible to everyone.

Sundeep_Mudgal
Employee

Dilian,

In that case, I will take this up with the clustering team, as the clustering module is supposed to update the routing daemon with all VIPs. Can you please open an SR as well, so support can try to reproduce the issue in-house?

Dilian_Chernev
Collaborator

Thank you @Sundeep_Mudgal, but the issue is currently resolved and we cannot reproduce the problem.

We have opened an SR and can send you the number so you can review the communication, logs, and debugs provided.
