Andrew_Tindall
Contributor

Bond interfaces and HA failover

A question about how many physical ports must be down before a bond is shown as down.

 

In the examples for how to build bond groups, it shows two interfaces being bonded, and issuing 'cphaprob show_bond' ('show cluster bond all' from clish) reports 2 slaves configured with 1 required.

Such as:

Bond name   | Mode         | State | Slaves configured | Slaves link up | Slaves required
bond100.100 | Load Sharing | UP    | 2                 | 2              | 1
bond110.200 | Load Sharing | UP    | 4                 | 4              | 3

 

From my testing, this means that a single interface being down will not show the bond interface as down, but as 'UP!': the bond interface is up, yet attention is required.

Bond name   | Mode         | State | Slaves configured | Slaves link up | Slaves required
bond100.100 | Load Sharing | UP !  | 2                 | 1              | 1
bond110.200 | Load Sharing | UP    | 4                 | 4              | 3
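The state logic implied by the tables above can be sketched as a small shell function. This is an illustration of the rule as observed in this post, not Check Point code; actual gateway behaviour may differ.

```shell
#!/bin/bash
# Sketch of the bond state rule as observed in 'cphaprob show_bond' output:
# a bond is DOWN only when slaves link up < slaves required;
# otherwise it is UP, flagged 'UP!' if any configured slave is down.
state_for() {  # args: <slaves configured> <slaves link up> <slaves required>
  local conf=$1 up=$2 req=$3
  if [ "$up" -lt "$req" ]; then
    echo "DOWN"
  elif [ "$up" -lt "$conf" ]; then
    echo "UP!"     # up, but attention required
  else
    echo "UP"
  fi
}

state_for 2 1 1   # prints UP!
state_for 4 2 3   # prints DOWN
state_for 4 4 3   # prints UP
```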

 

I also have a bond group configured using four interfaces, so when running the command it shows four up but three required. Is there any way to change the number of required interfaces, so the device would stay active with two interfaces down, i.e. 'required' being 2 rather than 3?

Bond name   | Mode         | State | Slaves configured | Slaves link up | Slaves required
bond100.100 | Load Sharing | UP    | 2                 | 2              | 1
bond110.200 | Load Sharing | DOWN  | 4                 | 2              | 3

 

Searching the knowledge base is drawing a blank for me. Does anyone know of an SK that could assist?
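For what it's worth, the mechanism referenced later in this thread is the cpha_bond_ls_config.conf file on the gateways. The sketch below assumes a "<bond name> <required slaves>" per-line format and that the value is applied on policy install; verify the exact format against the relevant SK or with TAC before using it.

```shell
# Assumed approach (verify against Check Point documentation/TAC):
# set the minimum required slaves for a bond, then install policy.
# File format assumed here: "<bond name> <required slaves>" per line.
echo "bond110.200 2" >> $FWDIR/conf/cpha_bond_ls_config.conf

# After a policy install, confirm the new 'Slaves required' value:
cphaprob show_bond
```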

7 Replies
Andrew_Tindall
Contributor

Thanks, I'll have a look at that.

Andrew_Tindall
Contributor

Well, the gateways accept the command and the cpha_bond_ls_config.conf looks correct.

 

However, 'cphaprob show_bond' still shows 4 configured, 4 up, 3 required after a policy push.

 

Daniel_Szydelko
Advisor

The question is: is it working then or not, even if it's still showing the wrong number of required slaves?

What is your software/JHF version?

If you feel that everything seems to be properly configured and it still doesn't work as expected then TAC should be involved.

BR

Daniel.

Andrew_Tindall
Contributor

show version all:


Product version Check Point Gaia R81.10
OS build 335
OS kernel version 3.10.0-957.21.3cpx86_64
OS edition 64-bit

 

cpinfo:

==============================================
General Info
==============================================
OS: Gaia
Version: R81.10 - Build 883
kernel: R81.10 - Build 793
Type: GW cluster

Daniel_Szydelko
Advisor

What about JHF? (cpinfo -y all).

BR

Daniel.

Andrew_Tindall
Contributor

It looks like the initial policy push I did didn't make the change live. I've just re-pushed the policy after making an unrelated change and can now see the following:

Bond name   | Mode         | State | Slaves configured | Slaves link up | Slaves required
bond100.100 | Load Sharing | UP    | 2                 | 2              | 1
bond110.200 | Load Sharing | DOWN  | 4                 | 2              | 2

 

Thanks for your help, Daniel. It looks like I'm behind an HFA, so next week's job will be raising the changes for patching the SCs and gateways on the live systems.

