CloudGuard (VMSS & Cluster) deployment in Azure - single Resource Group
Is it possible to deploy NorthBound VMSS instances and SouthBound cluster instances in a single Resource Group via the Azure deployment template (see the diagram below)?
Is it possible to modify the template in Azure in order to change this restriction? If so, do you have any documentation?
Same question for the VNet, is it possible to have both North/South hubs in one VNet?
VMSS cannot be used for outbound traffic.
Not sure how Geo ClusterXL would work with inbound traffic.
It certainly would not be scalable the way VMSS is.
I'm not aware of any specific limitation with regard to putting the two in the same VNet.
My question is: why is this relevant?
I'll use VMSS for Inbound Internet traffic and ClusterXL for Outbound Internet and E/W traffic as described in the blueprint.
I'm limited to one RG and one VNet for the deployment of both hubs in the Azure subscription I have.
When launching the Azure template, I run into the requirement that the RG must be empty and that the VNet is created by each template run. So: 2 x RG and 2 x VNet.
So, is there any information/documentation about the modifications I can make to the ARM template?
As explained in the reply below, we don't need to keep the frontend public-facing LB, as we already implemented a VMSS in the NorthBound hub for this purpose.
When we were in touch with Check Point, they told us to modify the template rather than delete the LB manually, because the template keeps track of it and, in case of updates, the LB would be created again.
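For reference, the usual way to make an ARM resource optional is the resource-level `condition` property. A minimal sketch, assuming the frontend LB is defined as its own resource in the template and using a hypothetical `deployFrontendLB` parameter (neither name is taken from the actual Check Point template):

```json
{
  "parameters": {
    "deployFrontendLB": {
      "type": "bool",
      "defaultValue": false,
      "metadata": { "description": "Hypothetical switch: deploy the frontend public LB only when true" }
    }
  },
  "resources": [
    {
      "condition": "[parameters('deployFrontendLB')]",
      "type": "Microsoft.Network/loadBalancers",
      "apiVersion": "2021-05-01",
      "name": "frontend-lb",
      "location": "[resourceGroup().location]",
      "properties": {}
    }
  ]
}
```

With `condition` evaluating to false, the LB is skipped at deployment time and will not be recreated on template updates, which matches what Check Point suggested.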
Is there any documentation regarding the comment "VMSS cannot be used for outbound traffic"?
It seems to contradict this documentation:
We support inbound, outbound, E-W and Remote Access flows with the VMSS solution, as described in the admin guide.
Make sure to deploy the VMSS with public IP addresses or with Frontend Public Load Balancer (this is configurable during template deployment).
We deployed CG with VMSS, and it is also clustered in HA. The only thing is we don't have a public IP address configured on the CG instances; for some reason we cannot edit the eth0 NIC of either CG instance to configure a public IP address.
The challenge starts when the CG can reach neither the internet nor the VMs behind it. Since it is just doing firewalling and routing, we need a default route pointing to whatever the next hop is. The problem is we don't know what our next hop is.
Is it the External LB?
or is it something else?
Given our situation, and since the inbound traffic traversing the CG is already in production, we would prefer not to redeploy the CG. What would you suggest? If you have any recommendations, please illustrate them with screenshots.
In order for outbound traffic to be allowed by Azure, the VMSS instances must either have an instance-level public IP, or be in the backend pool of a public load balancer.
For next-hop on the instance, you will not need to modify anything as the instances are pre-configured with the correct next-hops.
So, in case you have deployed the VMSS without a public load balancer, you can:
1. Deploy one manually (by default we deploy a public load balancer with the VMSS solution, unless you chose not to use it)
2. Set the health probe to port 8117 (screenshot attached)
3. Attach the VMSS eth0 NICs as the backend pool (screenshot attached)
4. Create one load-balancing rule (example attached)
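The steps above, sketched as an ARM fragment (a sketch only: the resource names and the public IP reference are placeholders, while TCP port 8117 is the CloudGuard health-check port mentioned in step 2):

```json
{
  "type": "Microsoft.Network/loadBalancers",
  "apiVersion": "2021-05-01",
  "name": "vmss-public-lb",
  "location": "[resourceGroup().location]",
  "sku": { "name": "Standard" },
  "properties": {
    "frontendIPConfigurations": [
      {
        "name": "lb-frontend",
        "properties": {
          "publicIPAddress": { "id": "[resourceId('Microsoft.Network/publicIPAddresses', 'vmss-lb-pip')]" }
        }
      }
    ],
    "backendAddressPools": [ { "name": "vmss-backend-pool" } ],
    "probes": [
      {
        "name": "cg-health-probe",
        "properties": { "protocol": "Tcp", "port": 8117, "intervalInSeconds": 5, "numberOfProbes": 2 }
      }
    ]
  }
}
```

The load-balancing rule from step 4 would then reference this frontend, backend pool, and probe by their sub-resource IDs.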
Yes, we have done this for the inbound traffic. The outbound traffic is OK if it is just return traffic for an inbound connection. But if the outbound traffic is initiated from an internal server VM behind the CG, it cannot reach the internet, and neither can the CG itself.
We also tried creating an additional NIC for the CG, binding a public IP address to it, and setting the default route to that interface (since we don't know the next-hop IP address), but it still does not work. We are not sure how outbound routing works on the Azure side once traffic leaves the CG.
Can you help us with the proper routing configuration for outbound Internet traffic on the following?
2. Frontend Subnet
3. External Load Balancer / NAT Gateway, if applicable
This applies to the CG both with and without a public IP on the VMSS.
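On the Azure side, outbound traffic from internal server VMs only reaches the CG if their subnet has a user-defined route whose default route points at the CG (typically the private frontend IP of the internal/backend load balancer in front of the VMSS). A sketch, assuming a hypothetical internal LB IP of 10.0.2.100; substitute your own addressing:

```json
{
  "type": "Microsoft.Network/routeTables",
  "apiVersion": "2021-05-01",
  "name": "internal-subnet-rt",
  "location": "[resourceGroup().location]",
  "properties": {
    "routes": [
      {
        "name": "default-via-cloudguard",
        "properties": {
          "addressPrefix": "0.0.0.0/0",
          "nextHopType": "VirtualAppliance",
          "nextHopIpAddress": "10.0.2.100"
        }
      }
    ]
  }
}
```

The route table is then associated with the internal server subnet; the frontend subnet itself should normally keep Azure's default Internet route so the gateways can egress (via their instance-level public IPs or the public load balancer, as discussed above).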
I come back to my Azure deployment. I now have 2 RGs and one VNet with 4 subnets: 2 for NorthBound (front and back) and 2 for SouthBound (front and back).
I deployed the VMSS in NorthBound RG with one external LB for inbound Internet traffic.
I've also deployed the HA cluster in the SouthBound RG for outbound Internet and E/W traffic. But I found 2 LBs deployed; contrary to the VMSS template, in this one we cannot choose the number of LBs:
- one Frontend-lb with a public IP
- one backend-lb
In my case, I don't need the frontend-lb as the inbound Internet traffic will be handled by the NorthBound firewalls.
So I'd like to redeploy by modifying the Azure JSON template file. However, I'm facing multiple errors with the '_artifactsLocation' parameter and need your help resolving this issue.
- By default the value is [deployment().properties.templateLink.uri] --> this value is not accepted:
- When I put in the value retrieved from the default template file (https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#), the concat is not performed as expected (https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#nestedtemplates/vnet-...).
I obtained the result below:
- I entered the value directly in the template file on the "networkSetupURL" line, but the deployment failed again with a different error:
I tried different ways to overcome this behaviour, in vain...
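One thing worth checking: `[deployment().properties.templateLink.uri]` only resolves when the template is deployed from a URI (e.g. as a linked template on a web or storage location). When you deploy a local, edited copy of the file, `templateLink` does not exist and the expression fails, which would explain the first error. A sketch of the usual pattern, using the `uri()` template function to build the nested-template URL (the nested-template filename and the example base URI are placeholders, not the real values from the Check Point template):

```json
{
  "parameters": {
    "_artifactsLocation": {
      "type": "string",
      "defaultValue": "https://example.blob.core.windows.net/templates/",
      "metadata": { "description": "Base URI where the nested templates are staged (placeholder value)" }
    }
  },
  "variables": {
    "networkSetupURL": "[uri(parameters('_artifactsLocation'), 'nestedtemplates/vnet.json')]"
  }
}
```

So instead of hard-coding `networkSetupURL`, you can stage the nested templates somewhere reachable (a storage account, or a raw GitHub URL) and pass that base URI as `_artifactsLocation` when deploying the edited local copy.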