
CloudGuard backend routing problem


We're installing a CloudGuard IaaS High Availability cluster using the latest deployment guide.

We're experiencing a problem with the internal routing: the internal load balancer that the template creates automatically does not seem to route traffic to the CloudGuard appliances.

On the management we do not see any traffic logs, but if we configure a cluster IP address on the Check Point backend network using the address that should belong to the backend load balancer (.4), we suddenly see traffic on the management, even traffic coming from the internet...

The route table assigned to the backend subnets and the routing on the Check Point are configured as described in the guide. (It seems strange that the Check Point routes to a phantom .1 address while the internal subnets route to the backend load balancer IP, .4.)

Any idea how to debug and solve this problem?

Thanks

Andy

 


On the backend connection you route traffic to the .1 address in the backend network, as that address is assigned to an Azure router that puts the traffic into Azure's software-defined networking, so it can reach the other subnets/VNets that may sit behind the firewall.

The backend subnets route traffic to the load balancer address because the load balancer performs a health check against the backend interface IPs of the two firewalls using the health probe. Only the active member of the cluster responds, so the backend load balancer forwards the traffic to the active member.

Upon failover, the newly active member starts to respond and the standby member doesn't, so traffic is sent to the other member instead. This is a lot better than how it used to work, with the API reconfiguring all the backend network UDRs.

A cluster IP on the backend is not guaranteed to fail over with the cluster, and you certainly shouldn't have to configure one.

It sounds to me as though the backend load balancer was not deployed properly, so I would suggest deleting and recreating it in Azure.

Someone with more Azure experience (I usually work with an Azure person when I do CloudGuard) may be able to help further with debugging, but I would suggest the backend load balancer is not correctly deployed.


We're still experiencing problems and are working with Check Point to solve them.

In addition, we attached an ExpressRoute connection following sk110993:

https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&solut...

In the sk's network diagram there is only one CloudGuard gateway. In our case we have an HA solution with frontend and backend subnets and load balancers in them.

The problem is that Azure accepts only one internal load balancer per VNet, so it's not possible to create another load balancer for the DMZ interface connected to the ExpressRoute gateway. For that reason we connected the gateway subnet directly to the internal load balancer, without a dedicated DMZ, and added a route on the gateway subnet to the internal load balancer IP.

Now, if we ping from the on-prem network through ExpressRoute to a VM behind the CloudGuard, we get a drop on the CloudGuard because the "ICMP reply does not match a previous request", so it seems the request was sent to the VM without passing through the CloudGuard.

Anyone have an idea what the problem could be?

Thanks.

Andy

Have you defined a UDR for the subnet in which the ExpressRoute gateway is configured (called GatewaySubnet) to forward packets for the destination subnet/VNet to the virtual IP of the internal LB? If not, the echo request will be forwarded directly to the destination VM.
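For reference, the UDR described above can be sketched with the Azure CLI. All names and addresses below are hypothetical placeholders (resource group cg-rg, VNet cg-vnet, route table gateway-subnet-rt, protected subnet 10.0.3.0/24, internal LB VIP 10.0.2.4), not values from this deployment:

```shell
# Create a route table for the GatewaySubnet (hypothetical names throughout)
az network route-table create \
  --resource-group cg-rg \
  --name gateway-subnet-rt

# Send traffic for the protected subnet to the internal LB's frontend IP;
# next-hop type VirtualAppliance lets us name an explicit next-hop address
az network route-table route create \
  --resource-group cg-rg \
  --route-table-name gateway-subnet-rt \
  --name to-backend-via-ilb \
  --address-prefix 10.0.3.0/24 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.2.4

# The route table only takes effect once associated with the GatewaySubnet
az network vnet subnet update \
  --resource-group cg-rg \
  --vnet-name cg-vnet \
  --name GatewaySubnet \
  --route-table gateway-subnet-rt
```

This is a configuration sketch only; adapt the prefixes and the VIP to your own deployment.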

 

 

Yes, we did that, but the route seems to be ignored.

 

 


Make sure the prefix lengths of your routes match: e.g., if you have a VNet route of /24, a UDR of /22 would not override that /24 system route, since the more specific prefix wins.
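One way to check which route actually wins is to list the effective routes Azure applies to the gateway's NIC; routes with source "User" are your UDRs, while "Default" entries are system routes. The resource group and NIC names below are hypothetical:

```shell
# Show the routes Azure actually applies to a NIC (hypothetical names);
# compare the prefixes and sources to see whether your UDR overrides
# the VNet system route
az network nic show-effective-route-table \
  --resource-group cg-rg \
  --name cg-gw1-eth1 \
  --output table
```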


Problem solved.

The UDR for the gateway subnet had the right route, but it was not associated with the gateway subnet.

After associating the UDR, the traffic was correctly handled by the CloudGuard.

Anyway, thanks for your support.

Andy
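For anyone hitting the same issue, the missing association can be made with the Azure CLI; the names below are hypothetical placeholders, not the ones from this deployment:

```shell
# A route table has no effect until it is associated with the subnet
az network vnet subnet update \
  --resource-group cg-rg \
  --vnet-name cg-vnet \
  --name GatewaySubnet \
  --route-table gateway-subnet-rt

# Verify the association took effect
az network vnet subnet show \
  --resource-group cg-rg \
  --vnet-name cg-vnet \
  --name GatewaySubnet \
  --query routeTable.id
```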

 


Now we're facing another strange problem with the backend routing.

Internal servers are able to reach the internet through the CloudGuard cluster with SNAT.

The problem is that we're unable to access the internal servers from outside through the CloudGuard with DNAT.

The Check Point seems to DNAT correctly and forwards the packets to the internal servers, but a dump on the internal interface shows that the ACK packets are not coming back.

At this point we wondered why the same servers could be reached over our ExpressRoute link through the CloudGuard without NAT. As a test we set up a NAT rule that does both DNAT and SNAT (with an IP address from the backend subnet), and we were able to access the server!

This means that the backend subnet is not able to route return traffic for public source IP addresses...

As described in the CloudGuard deployment guide, on the Check Point we simply set up a route to the .1 internal address for each subnet, and in the backend UDR we set a route for 0.0.0.0/0 to None.

Any idea how to solve this return-routing problem?

Thanks

Andy

 

 


I don't think this part of the deployment guide is correct:

"and on the backend UDR to set a route 0.0.0.0/0 to none."

I would set the default route to the VIP of the internal LB; otherwise the return packets will not reach the firewall.
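A sketch of that change with the Azure CLI, assuming a hypothetical resource group cg-rg, an existing backend route table backend-rt with a route named default-route, and an internal LB VIP of 10.0.2.4:

```shell
# Point the default route at the internal LB VIP instead of "None",
# so return traffic for public sources goes back through the active
# CloudGuard member (all names and addresses here are placeholders)
az network route-table route update \
  --resource-group cg-rg \
  --route-table-name backend-rt \
  --name default-route \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.2.4
```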

 

Matthias

 
