Nischit
Contributor

Maestro SG management interfaces bond

Hello,

I have 2 MHO 140 Orchestrators in redundancy. They are running R80.20SP, and I installed the latest hotfix at the time of installation. It's a production environment deployed a week ago with 4 GWs. I have configured bond interfaces for all the uplink connections and they are working fine.

I then created a bond for the SG management interfaces (eth1/1/1 eth1-Mgmt1 and eth2/1/1 eth1-Mgmt2) and it's working. I am able to establish SIC with the Management Server using the SG's bond interface IP address, and the policy installs successfully once. However, after that successful policy installation, the Management Server can no longer reach the SG IP address configured on the bond interface.

I am also unable to browse the SG's web UI. I then removed the bond interface and assigned a new bond interface for SG mgmt, and got the same issue again: I can reach the mgmt server and establish SIC, but after the policy installation the mgmt server is unable to reach the SG IP address.

Then I removed the bond from the SG management interfaces and assigned an IP address on the physical Mgmt interface of the SG, and it worked fine.

It seems like there is an issue when we bond the SG mgmt interfaces; it works fine with the bonds created for the uplinks. Has anyone tried bonding the SG Mgmt interfaces and then installing policy from the Mgmt Server?

Thanks,
Nischit


18 Replies
Norbert_Bohusch
Advisor

Yes, before I did a dual-site deployment for a customer, we had it configured as single site with 2 MHOs and had the Mgmt1 of both MHOs bonded as a MAGG.

How did you configure it? How/when was the FTW run?

Is it a GW or VSX?

Which bonding mode are you using? 
Did you strictly follow the admin guide? There's a chapter about it.

Nischit
Contributor

Hello,

I bonded them as a MAGG at first; it works fine (the SG is reachable) and I am able to establish SIC from the Mgmt server, but after installing the policy the SG becomes unreachable from the MGMT and vice versa. Then I bonded with LACP, but still the same issue.

Afterwards, I removed the bond and assigned the IP on the physical interface, and then it works.

It is a GW, not VSX.

I even detached all the GWs and ran the FTW again from the Maestro Web UI for all the attached GWs.

I strictly followed the admin guide as well.

Maarten_Sjouw
Champion
Have a look at the basic setup manual v1.2; it holds all the info regarding the MAGG bonding for 2 MHOs in 1 site.
But at first glance I see where you went wrong: all eth1-xxx interfaces are on MHO1 while all eth2-xxx are on MHO2, so you need to build a bond between eth1-Mgmt1 and eth2-Mgmt1.
The way to build this would be a procedure like the following (I just found some typos in this part, sorry):
add bonding group 1 mgmt
set interface eth2-Mgmt1 state on
add bonding group 1 mgmt interface eth2-Mgmt1
set bonding group 1 mode active-backup
set interface magg1 ipv4-address 1.2.3.11 mask-length 26
set management interface magg1
delete interface eth1-Mgmt1 ipv4-address
add bonding group 1 mgmt interface eth1-Mgmt1
set bonding group 1 primary eth1-Mgmt1
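A quick way to verify the result afterwards, assuming standard Gaia clish on the MHO (magg1 being the group created above), would be something like:
show bonding groups
show interface magg1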
Regards, Maarten
Nischit
Contributor

Hi, 

Yes, I am aware of this: eth1-xxx interfaces are on MHO1 while all eth2-xxx are on MHO2, and I did the same. However, while writing in this community I typed eth1-Mgmt1 and eth1-Mgmt2; I actually created a bond between eth1-Mgmt1 and eth2-Mgmt1.

I followed the same steps as you mentioned, but this didn't help, so I posted here.

The only issue is that when I install the policy from the mgmt server, the SG IP becomes unreachable.

I will try this again today and let you know. Thanks! 

Nischit
Contributor

Hi,

If I configure a MAGG on the Check Point side, what config should I do on the switch side? Because when I do a bond with LACP, I configure the port-channel interface and set protocol-mode lacp.
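For comparison, a rough Cisco-IOS-style sketch of that LACP port-channel (the switch vendor, interface names and VLAN are assumptions, not taken from this thread):
interface range GigabitEthernet1/0/1 - 2
 switchport mode access
 switchport access vlan 10
 channel-group 1 mode active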

Norbert_Bohusch
Advisor
MAGG also supports LACP with one of the latest JHFs.
With older JHFs only A/S and XOR (static) are supported.
Nischit
Contributor

Hi, 

We have installed Jumbo Hotfix Take 191.

Nischit
Contributor

Hi, 

You said that you also configured the bond using XOR, so what did you do on the switch side? Did it work with LACP on the switch side?

Norbert_Bohusch
Advisor

I just checked the JHF docs, and LACP for MAGG is only supported since JHF Take 210.

So you have to use XOR (which is often referred to as static bonding) or Active/Standby.
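A rough Cisco-IOS-style sketch of the XOR/static case (interface names and VLAN are assumptions): it is the same as an LACP port-channel, but the member ports use a static channel-group instead of LACP negotiation:
interface range GigabitEthernet1/0/1 - 2
 switchport mode access
 switchport access vlan 10
 channel-group 1 mode on
With Active/Standby, no port-channel is needed on the switch at all, just two access ports in the same VLAN.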

Maarten_Sjouw
Champion
I configured the bond as Active-Backup and the switch without bonding, just 2 access ports in the same VLAN.
It sounds like you have a problem where you are shutting yourself out by policy. You should still be able to get to the MHO itself and then jump to the SG with: m 1 1
Regards, Maarten
Nischit
Contributor

Hi,

Configuring the bond as active-backup and just connecting it to 2 access ports in the same VLAN didn't work. Now I am trying to bridge the mgmt interfaces to see if that works.

Nischit
Contributor

Hi, 

I created a bridge interface for mgmt, but it still didn't work.

Maarten_Sjouw
Champion
I did failover tests with it this way and it worked just fine for me.
As said, I still think your FW policy is not letting you in. You say that it fails as soon as you install the policy, so to me that really sounds like the policy is not allowing you in, and this has nothing to do with bonding or Maestro-related issues.
Regards, Maarten
Nischit
Contributor

Hi, 

Thank you for the update. I also thought it was an issue with the policy, so I tested with an any-any allow rule, but it still didn't work for me.

I will figure out if there is something missing. If it's working in your test environment, then it should work in my case as well. BTW, what management server OS version are you using? In my case it's R80.30 on the mgmt server and R80.20SP on the Maestro and GWs.

Maarten_Sjouw
Champion
I am running this against an R80.30 and an R80.40 MDS. I have 2 SGs running; the first is connected to the R80.40, and the other SG is hooked up to a CMA in the R80.30 MDS.
Regards, Maarten
rolf
Participant

👍

Raj_Khatri
Advisor

Hi Nischit, we have the exact same setup and I was curious if you got this working. We are using both eth1/1/1 (eth1-Mgmt1) and eth2/1/1 (eth1-Mgmt2) for management connectivity; however, we don't have a bond set up. Only eth1-Mgmt1 has the IP configured, which we establish SIC with. It is used as the internal interface of our FW cluster.
