Maestro SG management interfaces bond
Hello,
I have two MHO-140 Orchestrators in redundancy, running R80.20SP with the latest hotfix installed at deployment time. The environment went into production a week ago with 4 GWs. I have configured bond interfaces for all the uplink connections, and they are working fine.
I then created a bond for the SG management interfaces (eth1/1/1 eth1-Mgmt1 and eth2/1/1 eth1-Mgmt2), and it works: I can establish SIC with the Management Server using the bond interface IP address of the SG, and the first policy installation succeeds. However, after that successful policy installation, the Management Server can no longer reach the SG IP address configured on the bond interface.
I am also unable to browse the Web GUI of the SG. I then removed the bond interface and assigned a new bond interface for SG management, and hit the same issue again: I can reach the management server and establish SIC, but after the policy installation the management server can no longer reach the SG IP address.
Then I removed the bond on the SG management interfaces and assigned an IP address directly to the physical management interface of the SG, and it worked fine.
It seems like there is an issue when the SG management interfaces are bonded; the bonds created for the uplinks work fine. Has anyone tried bonding the SG management interfaces and then installing policy from the Management Server?
Thanks,
Nischit
Yes. Before I deployed a dual-site setup for a customer, we had it configured as a single site with 2 MHOs and had Mgmt1 of both MHOs bonded as a MAGG.
How did you configure it? How and when was the FTW run?
Is it a GW or VSX?
Which bonding mode are you using?
Did you strictly follow the admin guide? There is a chapter about it.
Hello,
I bonded as a MAGG at first; it works fine (the SG is reachable) and I am able to establish SIC from the Management Server. But after installing the policy, the SG becomes unreachable from the Management Server and vice versa. Then I bonded with LACP, but the issue is the same.
Afterward, I removed the bond and assigned the IP to the physical interface, and then it works.
It is a GW, not VSX.
I even detached all the GWs and ran the FTW again from the Maestro Web UI for all the attached GWs.
I strictly followed the admin guide as well.
At first glance I see where you went wrong: all eth1-xxx interfaces are on MHO1 while all eth2-xxx interfaces are on MHO2, so you need to build the bond between eth1-Mgmt1 and eth2-Mgmt1.
The procedure to build it would be as follows (I just found some typos in this part, sorry):
add bonding group 1 mgmt
set interface eth2-Mgmt1 state on
add bonding group 1 mgmt interface eth2-Mgmt1
set bonding group 1 mode active-backup
set interface magg1 ipv4-address 1.2.3.11 mask-length 26
set management interface magg1
delete interface eth1-Mgmt1 ipv4-address
add bonding group 1 mgmt interface eth1-Mgmt1
set bonding group 1 primary eth1-Mgmt1
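A quick sanity check at this point (a sketch; the exact output wording can vary between versions) is to verify the MAGG from Gaia Clish before touching the policy:
show bonding groups
show interface magg1
show management interface
magg1 should list eth1-Mgmt1 and eth2-Mgmt1 as enslaved interfaces, carry the management IP address, and be set as the management interface.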
Hi,
Yes, I am aware of this: eth1-xxx interfaces are on MHO1 while eth2-xxx interfaces are on MHO2, and that is what I did. While writing the post here I mistakenly wrote eth1-Mgmt1 and eth1-Mgmt2; the bond was actually created between eth1-Mgmt1 and eth2-Mgmt1.
I followed the same steps you mentioned earlier, but that didn't help, which is why I posted here.
The only issue is that when I install the policy from the management server, the SG IP becomes unreachable.
I will try this again today and let you know. Thanks!
Hi,
If I configure a MAGG on the Check Point side, what should I configure on the switch side? When I do a bond with LACP, I configure a port-channel interface and set the protocol mode to LACP.
With older JHFs, only Active/Backup and XOR (static) are supported.
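To answer the switch-side question (a sketch, assuming a Cisco IOS-style switch; VLAN 10 and the port names are placeholders): with Active/Backup you do not configure a port-channel at all, just two plain access ports in the management VLAN. With XOR (static) you build a static port-channel, provided both member ports land on the same switch (or an MLAG/vPC pair):
interface range GigabitEthernet1/0/1 - 2
 switchport mode access
 switchport access vlan 10
 channel-group 1 mode on
!
interface Port-channel1
 switchport mode access
 switchport access vlan 10
"channel-group 1 mode on" is the static (non-LACP) mode, which is what matches XOR on the Gaia side.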
Hi,
We have installed Jumbo Hotfix Take 191.
Hi,
You mentioned that you also configured the bond using XOR. What did you do on the switch side? Did it work with LACP on the switch side?
I just checked the JHF documentation, and LACP for MAGG is only supported since JHF Take 210.
So you have to use XOR (often referred to as static bonding) or Active/Standby.
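If you want to change the MAGG from the earlier example to static XOR instead of rebuilding it, something like this in Gaia Clish should do (a sketch; group 1 is the example ID used above, and the xmit-hash-policy line is optional):
set bonding group 1 mode xor
set bonding group 1 xmit-hash-policy layer2
save config
The switch side then needs a matching static port-channel rather than an LACP one.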
It sounds like you are locking yourself out with the policy. You should still be able to get to the MHO itself and then jump to the SG with: m 1 1
Hi,
Configuring the bond as active-backup and just connecting it to two access VLANs didn't work. Now I am trying to bridge the mgmt interfaces to see if that works.
Hi,
I created a bridge interface for mgmt, but it still didn't work.
As said, I still think your FW policy is not letting you in. You say that it fails as soon as you install the policy, so to me that really sounds like the policy is blocking the access, and this has nothing to do with bonding or Maestro-related issues.
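One way to confirm or rule that out (a sketch; <MGMT_IP> is a placeholder for your Management Server address) is to jump from the MHO to the SG as mentioned above and watch for policy drops while you try to connect:
m 1 1
fw ctl zdebug + drop | grep <MGMT_IP>
If nothing shows up as dropped there, the problem is more likely routing or the bond/interface configuration itself rather than the policy.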
Hi,
Thank you for the update. I also thought it was a policy issue, so I tested with an any-any allow rule, but it still didn't work for me.
I will figure out whether something is missing. If it's working in your test environment, then it should work in my case as well. By the way, which management server version are you using? In my case it's R80.30 on the management server and R80.20SP on the Maestro and GWs.
👍
Hi Nischit, we have the exact same setup and I was curious whether you got this working. We are using both eth1/1/1 (eth1-Mgmt1) and eth2/1/1 (eth1-Mgmt2) for management connectivity; however, we don't have a bond set up. Only eth1-Mgmt1 has the IP configured, which we establish SIC with. It is used as the internal interface of our FW cluster.
