Does Transit VPC support ClusterXL?
Hi guys,
I want to know if you have ever built a Transit VPC architecture with ClusterXL.
We are trying to deploy ClusterXL in both the transit VPC and the spoke VPCs,
and we designed it to run VPN tunnels over VPC peering with OSPF.
However, we found that the standby member in the spoke VPC could not get Internet access (the return packet is dropped by the gateway in the transit VPC).
I have read the documents about Transit VPC, and all the examples are based on a single gateway.
So is there any problem with ClusterXL?
Regards
My problem is that I have never heard of Transit VPC and can only find it in sk120534 (vSEC for AWS, CloudGuard for AWS), sk130372 (Security Management Server with CloudGuard for AWS), and sk111013 (AWS CloudFormation Templates). So I would have TAC answer that...
Hi Gunther,
I referred to sk120534 and the "Transit VPC for AWS Deployment Guide" in it.
But all the examples use BGP and a single gateway. None of them mentions the ClusterXL situation.
Regards.
You may be bumping into a limitation of ClusterXL, not the Transit VPC configuration:
“High Availability Support for OSPF
Gaia supports the OSPF protocol in clusters configured either via VRRP or ClusterXL. In this configuration, the cluster becomes a Virtual Router. The neighbor routers see it as a single router, where the virtual IP address of the cluster becomes the router ID. Each member of the cluster runs the OSPF process, but only the master actively exchanges routing information with OSPF neighbor routers. When a failover occurs, a standby member of the cluster becomes the master and begins exchanging routing information with the neighbor routers.”
I suspect that there is a good reason why Check Point uses BGP for dynamic routing over IPsec VTIs.
Hi Vladimir,
So I assume we cannot use OSPF in my scenario, right?
Without the OSPF route, the standby member in the spoke VPC cannot reach the Internet to make the AWS API call, so the ClusterXL failover would fail.
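For context, the failover depends on a call like this to the EC2 API, which is why the standby needs Internet (or VPC endpoint) reachability. Here is a minimal sketch of the kind of route update involved, using boto3 with hypothetical IDs; the actual CloudGuard failover scripts differ:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical IDs; the real values come from the CloudGuard configuration.
ROUTE_TABLE_ID = "rtb-0123456789abcdef0"
NEW_ACTIVE_ENI = "eni-0123456789abcdef0"  # ENI of the member taking over

# On failover, repoint the default route at the surviving member's interface.
ec2.replace_route(
    RouteTableId=ROUTE_TABLE_ID,
    DestinationCidrBlock="0.0.0.0/0",
    NetworkInterfaceId=NEW_ACTIVE_ENI,
)
```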
As the next step we are going to use BGP. Does BGP on Check Point have the same limitations as OSPF?
Regards.
Transit VPC failover is designed with two gateways in different AZs.
Your redundancy is then achieved via BGP convergence, using route-based VPNs to establish connectivity between hub and spokes.
ClusterXL redundancy relies on both members being in the same AZ.
If you are using or planning to use domain-based VPNs, use a cluster in a single AZ.
If you have no need for domain-based VPNs, use Transit VPC.
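One quick way to verify the single-AZ constraint is to check the members' placement via the EC2 API. A minimal sketch, assuming boto3 and hypothetical instance IDs:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical instance IDs for the two cluster members.
MEMBER_IDS = ["i-0aaaaaaaaaaaaaaa1", "i-0bbbbbbbbbbbbbb2"]

resp = ec2.describe_instances(InstanceIds=MEMBER_IDS)
zones = {
    inst["InstanceId"]: inst["Placement"]["AvailabilityZone"]
    for reservation in resp["Reservations"]
    for inst in reservation["Instances"]
}
print(zones)

# ClusterXL members are expected to share an AZ.
if len(set(zones.values())) > 1:
    print("WARNING: cluster members span AZs")
```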
Hi Vladimir,
This is our designed architecture.
The customer wants redundancy both inside an AZ and between AZs.
In AZ1, CP-1 and CP-2 are members of a cluster, and the same in AZ2.
My question is: could we use both ClusterXL and Transit VPC at the same time?
In my lab, once I used ClusterXL, the Transit VPC could not work as designed.
Regards,
Dawei Ye
Dawei,
The Transit VPC relies on VPN between the Check Point gateways in a hub VPC and the AWS VGWs in the spokes.
What you are showing in your diagram references CP gateways in each spoke.
This defeats the purpose of the Transit VPC architecture, as it is meant to be a cost-saving and consolidation architecture.
Please clarify what kind of lateral traffic, between which VPCs and the Internet, you are planning to inspect. Or, better yet, describe the overall goal that you are trying to achieve.
Regards,
Vladimir
Hi Vladimir,
Yes, we deployed CP gateways in each spoke VPC,
and our goal is to use Transit VPC to inspect traffic between spokes.
We designed it with a cluster in each AZ and BGP running between the clusters. (The only difference from the Transit VPC guide is that we use a cluster rather than a single gateway.)
For redundancy inside an AZ we have the cluster; for redundancy between AZs we use BGP.
Only if all the gateways in one AZ are down will the other AZ be used.
And all Internet access from the spoke VPCs will go through the hub VPC.
Regards,
Dawei
From a design perspective, if you already have Check Point gateways in each of your spokes, why not simply configure a mesh VPN between them? It will require an IGW in each VPC, but it should not reduce the security posture: all egress traffic will still go through a Check Point gateway, and you will have universal visibility into the traffic, as all of the gateways will be managed by and logging to the same management server.
From what I understand, the Transit VPC, as designed, relies on the baked-in automation to provision VGWs, establish VPN tunnels between those and the VTIs of each gateway, and configure dynamic routing on top of those.
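To illustrate the spoke-side provisioning that automation performs, here is a minimal boto3 sketch. The VPC ID, IP, and ASN are hypothetical, and the real Transit VPC automation does considerably more:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

SPOKE_VPC_ID = "vpc-0123456789abcdef0"  # hypothetical spoke VPC
HUB_GW_IP    = "203.0.113.10"           # public IP of one hub gateway
HUB_GW_ASN   = 65001                    # hypothetical BGP ASN of the hub GW

# 1. Create and attach a virtual private gateway (VGW) in the spoke.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(VpnGatewayId=vgw["VpnGatewayId"], VpcId=SPOKE_VPC_ID)

# 2. Represent the hub Check Point gateway as a customer gateway.
cgw = ec2.create_customer_gateway(
    BgpAsn=HUB_GW_ASN, PublicIp=HUB_GW_IP, Type="ipsec.1"
)["CustomerGateway"]

# 3. Create a BGP-based (dynamic) VPN connection between the two; on the
#    Check Point side the tunnel terminates on a VTI and BGP runs over it.
vpn = ec2.create_vpn_connection(
    CustomerGatewayId=cgw["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGatewayId"],
    Type="ipsec.1",
    Options={"StaticRoutesOnly": False},
)["VpnConnection"]
print("Provisioned", vpn["VpnConnectionId"])
```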
If one of the hub gateways fails, BGP takes care of the routing failover on top of the established VPN tunnels, and the VGWs are aware of it.
In your scenario, as depicted, you are completely ignoring the automation piece and would have to provision everything manually.
Do you actually have multiple VTIs on each gateway in each cluster member, each connected to every other VTI of every gateway in each spoke?