Standby Management can't connect to the gateways
We have deployed a secondary management server in Azure, behind a CloudGuard IaaS cluster.
The active management server is on-prem, behind a Check Point cluster.
There is a site-to-site VPN between on-prem and Azure. All gateways and management servers are running R80.40.
The two management servers can talk and the standby successfully syncs with the active.
The issue: when I connect to the standby with SmartConsole, it can't connect to any of the gateways (Status: "Connection is lost"), even if I make the standby the active one.
I can see that connections from the standby management to the gateways are accepted by the policy. On the gateways I see the connections coming in, but nothing going back.
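One way to confirm that the replies never leave the gateway is a packet capture on the gateway itself. This is just a sketch: the management IP below is a placeholder, and 18191 is the standard CPD/SIC port (status checks also use 18192).

```shell
# Placeholder: 10.1.1.10 stands in for the standby management's IP
# as seen by the gateway. If you only see inbound SYNs on 18191 and
# no replies, the gateway is dropping or misrouting the return path.
tcpdump -ni any host 10.1.1.10 and port 18191
```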
I think it’s because the standby management tries to connect to the gateways on their public IPs instead of the private one.
From the management, if I ping the gateway on its private IP it works, but pinging the public IP fails.
Is there a way to force the management to use the private IP of the gateway without changing it via its properties in SmartConsole?
The CloudGuard Network HA for Azure deployment guide mentions this:
“In the IPv4 address field:
If you manage the cluster from the same Virtual Network, enter the Cluster Member's private IP address. Otherwise, enter the Cluster Member's public IP address.”
So currently it's set to the cluster member's public IP, because the cluster is managed by the management server located on-prem.
Does this mean that if the standby management server becomes active the first thing we need to do is change the IPv4 address to the cluster member’s private IP?
Management uses the main IP to communicate with the gateways.
The assumption is both the primary and secondary management can use the same IP address to communicate.
In this case, the NAT happens in Azure when the communication originates from outside Azure, but that NAT doesn't happen when it originates inside Azure.
I suspect you are correct: you will have to change the main IP.
You can also try assigning the public IP to the gateway on the loopback interface to see if that works (but do it during a maintenance window in case it has an impact).
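The loopback idea above could be sketched like this on the gateway. Gaia is Linux-based, so plain `ip` commands work from expert mode; the public IP below is a placeholder, and an address added this way does not survive a reboot, so treat it strictly as a maintenance-window test.

```shell
# Assumption: 20.0.0.5 stands in for the gateway's Azure public IP.
# Adding it to the loopback makes the gateway answer locally for that
# address, so traffic from the standby management to the public IP no
# longer depends on Azure performing the NAT.
ip addr add 20.0.0.5/32 dev lo

# Verify the address is present
ip addr show dev lo

# Roll back if it causes problems:
# ip addr del 20.0.0.5/32 dev lo
```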
The way to configure a cluster is as described by PhoneBoy:
Cluster main address: Public IP
Member main IP: private address
Then there's usually no need for workarounds.
If this is not the case here, I would change the configuration accordingly before playing around with routing or anything else.
(There may be some exceptions, for sure)
You may want to try using route tables and user-defined routes (UDRs) to get your Azure management server talking to the cluster.
On the Azure management server's subnet, try static routes that send the cluster members' individual public IPs, as well as the public VIP, through the cluster's internal interfaces.
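A sketch of that UDR setup with the Azure CLI. All resource names and IP addresses below are placeholders, not values from this thread; adjust them to your environment and test during a maintenance window.

```shell
# Placeholders: my-rg, mgmt-routes, mgmt-vnet, mgmt-subnet, and all IPs.
RG=my-rg
RT=mgmt-routes

# Create a route table for the management server's subnet
az network route-table create --resource-group $RG --name $RT

# Route a cluster member's public IP through the cluster's internal
# interface, so the traffic stays inside the VNet instead of relying
# on Azure NAT (repeat for each member)
az network route-table route create --resource-group $RG \
  --route-table-name $RT --name member-a-public \
  --address-prefix 20.0.0.5/32 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.1.4

# Same for the cluster's public VIP
az network route-table route create --resource-group $RG \
  --route-table-name $RT --name cluster-vip-public \
  --address-prefix 20.0.0.7/32 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.1.4

# Associate the route table with the management subnet
az network vnet subnet update --resource-group $RG \
  --vnet-name mgmt-vnet --name mgmt-subnet --route-table $RT
```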