Bond interfaces and HA failover
A question about how many physical ports must be down before a bond is shown as down.
In the examples for building bond groups, two interfaces are bonded, and issuing 'cphaprob show_bond' ('show cluster bond all' from clish) shows 2 slaves configured with 1 slave required.
Such as:
Bond name | Mode | State | Slaves configured | Slaves link up | Slaves required
bond100.100 | Load Sharing | UP | 2 | 2 | 1
bond110.200 | Load Sharing | UP | 4 | 4 | 3
From my testing, this means that a single interface being down will not show the bond interface as down, but as 'UP!' - bond interface is UP yet attention is required.
Bond name | Mode | State | Slaves configured | Slaves link up | Slaves required
bond100.100 | Load Sharing | UP ! | 2 | 1 | 1
bond110.200 | Load Sharing | UP | 4 | 4 | 3
I also have a bond group configured with four interfaces, so when running the command it shows four up but three required. Is there any way to change the number of required interfaces, so the device would stay active with two interfaces down, i.e. 'required' being 2 rather than 3?
Bond name | Mode | State | Slaves configured | Slaves link up | Slaves required
bond100.100 | Load Sharing | UP | 2 | 2 | 1
bond110.200 | Load Sharing | DOWN | 4 | 2 | 3
Searching the knowledge base is drawing a blank for me - does anyone know of an SK that could assist?
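As an aside, the relationship in the tables above (bond goes to 'UP !' when the up count equals the required count, and DOWN when it drops below it) can be checked with a small script. This is a sketch that parses the pipe-separated layout pasted above; the field positions are an assumption based on that layout, not on any documented output format, so adjust if your 'cphaprob show_bond' prints differently.

```shell
# Flag any bond whose link-up slave count is at or below its required count,
# i.e. bonds that are DOWN or one failure away from DOWN ('UP !').
# Field positions assume the pipe-separated table shown in this post.
parse_bonds() {
  awk -F'|' 'NR > 1 {
    gsub(/ /, "", $5); gsub(/ /, "", $6)   # $5 = slaves link up, $6 = slaves required
    name = $1; gsub(/ /, "", name)
    if ($5 + 0 <= $6 + 0)
      print name " at-risk (up=" $5 " required=" $6 ")"
  }'
}

# Sample input taken from the output pasted above:
parse_bonds <<'EOF'
Bond name | Mode | State |Slaves configured |Slaves link up |Slaves required
bond100.100 | Load Sharing | UP ! | 2 | 1 | 1
bond110.200 | Load Sharing | UP | 4 | 4 | 3
EOF
# prints: bond100.100 at-risk (up=1 required=1)
```

In a live setting you would pipe the real command into it instead, e.g. `cphaprob show_bond | parse_bonds`.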
Accepted Solutions
Thanks, I'll have a look at that.
Well, the gateways accept the command and cpha_bond_ls_config.conf looks correct.
However, 'cphaprob show_bond' still shows 4 configured, 4 up, 3 required, even after a policy push.
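For readers following along: the mechanism under discussion is a per-bond entry in cpha_bond_ls_config.conf on the management side, applied to the gateways by a policy install. The file location and line format below are my recollection of how this is usually described, not something stated in this thread - verify them against the SK you are following, including whether the base bond name or the VLAN interface name (e.g. bond110 vs bond110.200) is expected:

```
# $FWDIR/conf/cpha_bond_ls_config.conf  (assumed location - check the SK)
# Assumed format: <bond name> <minimal number of required slaves>
bond110 2
```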
The question is - is it working or not, even if it's still showing the wrong number of required slaves?
What is your software/JHF version?
If everything seems to be properly configured and it still doesn't work as expected, then TAC should be involved.
BR
Daniel.
'show version all':
Product version Check Point Gaia R81.10
OS build 335
OS kernel version 3.10.0-957.21.3cpx86_64
OS edition 64-bit
cpinfo:
==============================================
General Info
==============================================
OS: Gaia
Version: R81.10 - Build 883
kernel: R81.10 - Build 793
Type: GW cluster
What about JHF? (cpinfo -y all).
BR
Daniel.
It looks like the initial policy push I did didn't make the change live. I've just re-pushed the policy after making an unrelated change and can now see the following:
Bond name | Mode | State | Slaves configured | Slaves link up | Slaves required
bond100.100 | Load Sharing | UP | 2 | 2 | 1
bond110.200 | Load Sharing | DOWN | 4 | 2 | 2
Thanks for your help, Daniel. It looks like I'm behind an HFA, so next week's job will be raising the changes for patching the SCs and gateways on the live systems.
