tsitarz
Explorer

Interface-Bonding without Downtime?

Hello,

we are running a Check Point gateway cluster on R80.30. We have a rather simple setup:

eth1 connects to internal
eth2 unused

eth5 connects to external
eth6 unused

The cluster syncs over a dedicated sync interface. There's no management interface.

Now, on both devices, we want to bond eth1+eth2 and eth5+eth6 while keeping the IP configuration of the currently used interfaces, ideally without any downtime.

Our initial plan was to configure the bonding on the passive device and switch it to active by performing a failover. But as I was writing my notes, I noticed that I very probably won't be able to fail over because, for instance, the eth1 cluster interface will be missing on one of the members after the bond is configured. I also cannot take the interface out of the cluster (change the cluster interface to private), since the virtual IP is used as our internal default gateway.

The only way I see is to take two completely unused ports, bond and connect them physically, and then, during a much shorter downtime window, take eth1 out, configure its IP on the newly created bond, and assign the bond as a cluster interface while unassigning eth1.
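For reference, this fallback plan would look roughly like the following in Gaia clish. This is only a sketch: the bond group number, interface names (eth3/eth4) and IP address are placeholders, and note that interfaces must have no IP assigned before they can be added to a bond.

```
# Build a bond from two completely unused ports (eth3/eth4 are placeholders)
add bonding group 1
add bonding group 1 interface eth3
add bonding group 1 interface eth4

# During the downtime window: move the internal IP from eth1 to the bond
delete interface eth1 ipv4-address
set interface bond1 ipv4-address 192.0.2.2 mask-length 24
save config
# ...then swap the cluster interface from eth1 to bond1 in the topology and push policy
```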


Is there a way to do this procedure without any downtime?

Or will the failover work if one of the cluster interfaces does not exist on the passive device when I run "clusterXL_admin down" on the active device?

I hope it's clear what I mean; please let me know if I should clarify further.

Thanks in advance, have a nice day 🙂

2 Replies
Chris_Atkinson
Employee

Bonds can start with a single interface/slave; you can then move the IPs and then bring the existing interfaces into the bonds.
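In Gaia clish, this single-slave approach might look something like the sketch below (the group number and IP address are placeholders, not a tested procedure):

```
# Start the bond with only the currently unused port
add bonding group 1
add bonding group 1 interface eth2

# Move the IP from eth1 to the bond
delete interface eth1 ipv4-address
set interface bond1 ipv4-address 192.0.2.2 mask-length 24

# Finally, bring the existing interface into the bond
add bonding group 1 interface eth1
save config
```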

Whilst you can minimize it, some downtime is to be expected and should be planned for. Be sure to consider external factors such as ARP cache timeouts.

Bob_Zimmerman
Advisor

I don't have a whole lot of SmartCenters available right this second to check a bunch of versions, but I know R81 lets you specify the individual interface names on each member which back a given cluster interface. I think that feature dates back to R80.20 or earlier.

Using this capability, you could mark a member admin down (to prevent early failovers), edit the cluster topology to set the cluster interfaces to be backed by the bonds on that member, set up the bonds on the CLI, push policy, then check sync. It should work, but you might have instability when you remove the admin-down pnote from the first-bonded member. If that all works and the cluster members both report healthy, you just force a failover and repeat the process on the other cluster member.
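As a sketch, the per-member sequence described above could be driven with these expert-mode commands (exact output varies by version; the SmartConsole and clish steps happen in between):

```
clusterXL_admin down    # mark this member admin-down to prevent early failovers
cphaprob stat           # confirm the member now reports DOWN
# ...edit the cluster topology in SmartConsole, build the bonds in clish, push policy...
cphaprob -a if          # verify the cluster interfaces are now seen on the bonds
cphaprob syncstat       # check sync statistics
clusterXL_admin up      # remove the admin-down pnote once everything is healthy
```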

Depending on how sensitive you are to unexpected downtime, you can test it ahead of time in VMs. I don't think any VM platforms support link aggregation protocols between the virtual switch and the VMs, but you can have bonds with single member interfaces. Just use round-robin transmit link selection and no LACP.
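For such a lab VM, a single-slave bond without LACP could be configured like this in Gaia clish (the group number and interface name are examples):

```
add bonding group 1
add bonding group 1 interface eth1
set bonding group 1 mode round-robin   # transmit-only link selection, no LACP support needed on the virtual switch
save config
```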
