Custom routes for CloudGuard IaaS in GCP
I have a question about route injection when using CloudGuard High Availability clusters on Google Cloud Platform. I know this behavior is controlled by $FWDIR/conf/gcp-ha.json, which looks like this by default:
{
    "debug": false,
    "public_ip": "mycluster01-primary-cluster-address",
    "secondary_public_ip": "mycluster01-secondary-cluster-address",
    "dest_ranges": ["0.0.0.0/0"]
}
Instead of a default route, I'd like to advertise the RFC-1918 routes to the internal VPC network. So I modify the last line like this:
"dest_ranges": ["192.168.0.0/16", "172.16.0.0/12", "10.0.0.0/8"]
Then I reboot. I'd expect all three routes to be injected, but I only see 192.168.0.0/16.
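For reference, the full gcp-ha.json after that edit (all values other than dest_ranges are unchanged from the default shown above):

```json
{
    "debug": false,
    "public_ip": "mycluster01-primary-cluster-address",
    "secondary_public_ip": "mycluster01-secondary-cluster-address",
    "dest_ranges": ["192.168.0.0/16", "172.16.0.0/12", "10.0.0.0/8"]
}
```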
Hi johnnyringo,
Can you elaborate on why you are trying to change the routes on the gateway itself? Routing is generally configured on the GCP side (in the Cloud Console).
I also found this link that might be relevant: https://cloud.google.com/vpc/docs/routes
With HA deployments, the static routes must be injected by the Check Point gateway so that outbound traffic is correctly routed to the active member.
Simply adding static routes in GCP is fine for standalone gateways, but not for HA clusters.
Hi johnnyringo,
Support for multiple destination ranges was only added in newer builds; please see sk147032 for the full list.
You are probably using an older build, so in order to use multiple destination ranges, please upgrade to the latest build.
For more details, see the CloudGuard Network High Availability for Google Cloud Platform R80.30 and Higher Deployment Guide.
If that is not the case, please contact Check Point Support and request to open a ticket.
Thanks,
Natanel
OK, thanks. I took a closer look at $FWDIR/scripts/gcp_had.py and see this line:
def add_route(name, network, priority, next_hop_instance, range=None, project=None):
    if not range:
        range = conf['dest_ranges'][0]
To support multiple ranges, it would need to iterate over the full dest_ranges list rather than taking only the first element.
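A minimal sketch of that change, assuming the add_route signature from the snippet above. The stub body, the wrapper name add_all_routes, and the "-&lt;index&gt;" name suffix are my assumptions, not the shipped code:

```python
# Sketch: loop over every configured range instead of only the first.
# add_route here is a stand-in for the real function in gcp_had.py, which
# issues the GCP API call; this stub just returns what it would inject.
conf = {'dest_ranges': ['192.168.0.0/16', '172.16.0.0/12', '10.0.0.0/8']}

def add_route(name, network, priority, next_hop_instance,
              range=None, project=None):
    if not range:
        range = conf['dest_ranges'][0]  # current behavior: first range only
    return (name, range)

def add_all_routes(name, network, priority, next_hop_instance, project=None):
    # GCP route names must be unique, so suffix an index per range.
    return [add_route('{}-{}'.format(name, i), network, priority,
                      next_hop_instance, range=r, project=project)
            for i, r in enumerate(conf['dest_ranges'])]
```

Called on failover, a loop like this would inject three routes instead of one.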
We are currently on R80.40 Jumbo HF T158 with no plans to upgrade until next year. Could the gcp_had.py script be copied from an R81.10 gateway and still work in R80.40?
We do not officially support it, but yes, it should work for you.
If you choose to take the risk, please follow the steps below:
- Back up the gcp_had.py script on both instances. Run:
  mv $FWDIR/scripts/gcp_had.py $FWDIR/scripts/gcp_had.py.backup
- Extract the attached script and place it at $FWDIR/scripts/gcp_had.py on both instances.
- Kill the GCP daemon. In Expert mode, run:
  ps aux | grep had
  killall had
- Make sure the process is running again: ps aux | grep had
- In Expert mode, run: cpwd_admin list | grep -E "PID|GCP_HAD"
- Test it again.
Thanks
I've started looking at this and am afraid it will require quite a bit of work. For starters, there are no provisions for scenarios with multiple clusters in the same project on the same internal VPC network.
As a quick fix, I did this to automatically set CHKP_TAG to the cluster name, which should always be unique:
import platform
CHKP_TAG = platform.node()[0:-8]
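For illustration, here is how that slice behaves. The hostname below and the fixed 8-character suffix are assumptions; adjust to your actual member naming scheme:

```python
# platform.node() returns the member's hostname; slicing off the last
# 8 characters assumes every member name ends in a fixed-length suffix.
hostname = 'mycluster01-primary'  # hypothetical HA member hostname
CHKP_TAG = hostname[0:-8]         # drops '-primary', leaving the cluster name
```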
Some background on the route tagging requirement: our Google account team recommended using a single VPC for all regions to get the benefit of central management as we horizontally scale to multiple regions (currently at 8, with plans to use 11 by end of year). The main caveat is that static routes with instances as the next hop are not region aware and essentially just use ECMP. This can result in a VM located in, say, europe-west2 accessing the internet via australia-southeast1.
I'll continue working on it, but would really recommend Check Point consider addressing these requirements in future versions. Our use case is not abnormal or unique: any customer using multiple regions and following GCP best practices would encounter this issue, and could potentially trigger an outage by bringing up a second cluster on a network that already has one.
It may be more customization than you're willing to do, but I think you can do what you're interested in via network tags on both the injected routes and the regional instances (assuming the end points are GCE instances).
So, using your CHKP_TAG modification, it appears you can modify the create_route_request function in gcp_had.py to include that variable as a network tag. The code already has a hard-coded empty tags array, so you may not even need to pass CHKP_TAG in the function call. Just replace the empty array with one that includes the variable.
def create_route_request(name, network, range, priority, next_hop_instance,
                         project):
    logger.info('Adding route "' + name + '":' +
                '\n\t network: ' + network +
                '\n\t range: ' + range +
                '\n\t priority: ' + str(priority) +
                '\n\t next hop instance: ' + next_hop_instance)
    path = '/projects/{}/global/routes'.format(project)
    route = {
        'destRange': range,
        'name': name,
        'network': '/projects/{}/global/networks/{}'.format(project, network),
        'priority': priority,
        'tags': [],
        'nextHopInstance': next_hop_instance
    }
    return gcp('POST', path, body=json.dumps(route))['selfLink']
which should inject the route into the VPC using the tag.
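Concretely, only the tags entry would change. A sketch of the resulting route body, where the literal values are placeholders and CHKP_TAG stands for the variable defined earlier in the thread:

```python
CHKP_TAG = 'mycluster01'  # placeholder; derived from platform.node() in practice

# Route body as built in create_route_request, with the one-line change:
route = {
    'destRange': '0.0.0.0/0',
    'name': 'x-chkp-route',
    'network': '/projects/my-project/global/networks/my-vpc',
    'priority': 100,
    'tags': [CHKP_TAG],  # was [] in the shipped script
    'nextHopInstance': 'instance-a',
}
```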
This is similar to the guidance in sk114577, "Check Point CloudGuard IaaS reference architecture for Google Cloud Platform". At the end of section (4) Deployment, it indicates that the deployment automatically creates network tags for the gateway instances; section (8) Inspecting traffic then references those tags when it has you create a default outbound route associated with the same tag.
Then if your source systems (assumption: GCE instances) also have the same network tag, it should preferentially use the tagged route which matches the network tag. Assuming the network tags are unique per region (and they should be based on your CHKP_TAG variable), then it should route traffic "regionally".
It's slightly unclear from Google documentation how the network tag and routing interact. It appears that the network tag impacts the selection of "applicable routes" (https://cloud.google.com/vpc/docs/routes#applicable_routes), so in your example, the europe-west2 instances would match only the europe-west2 CHKP_TAG route and the rest of your RFC1918 routes would not be applicable to europe-west2 instances since they would have a network tag which does not match the network tag for europe-west2. Then it would merely be up to the routing order (https://cloud.google.com/vpc/docs/routes#routeselection) to be certain that you're routing as you expect.
It seems feasible, but obviously some testing would be required.