Communicating AWS vSEC with On-Prem SMS and GW
I have some queries regarding AWS vSEC and on-premise communication:
1. I have added the AWS CloudGuard (CG) instance to our on-premise SMS through the CG public IP address. This was successful and SIC is established. Is this the best way to add the CG?
2. I have configured a VPN between the on-prem GW and the CG. This is not being established due to a certificate error, as also mentioned in a previous update. On further checking the CG logs, I saw that it could not retrieve the CRL.
3. Once the VPN is negotiated, does communication to the CG public IP, including CRL retrieval, go through the VPN?
4. We have seen that communication to the external GW public IP (which is also the peer IP address for the VPN) stops working. Is there any way to exclude this traffic so the CG can keep communicating with the on-prem servers?
5. We are unable to see logs from this CG. The reason could be that the log server objects have local IP addresses that are not recognised by the CG.
I would appreciate it if someone could advise on best practices around the above queries.
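Regarding point 2: the management server's Internal CA serves its CRL over HTTP on TCP port 18264, so CRL reachability can be tested from the CG instance independently of the VPN. A rough sketch of such a check (the IP address is a placeholder for the SMS address as seen from the cloud, and `curl` may be `curl_cli` on Gaia):

```shell
# Check basic TCP reachability of the ICA CRL service (TCP 18264) on the SMS.
# Replace 198.51.100.10 with the address the CG uses to reach the SMS.
curl -v --max-time 10 http://198.51.100.10:18264/
```

If the connection times out here, the certificate error during IKE is expected, since the gateway cannot validate the peer certificate against a fresh CRL.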
- vsec aws
I don't have any experience with AWS CloudGuard but I do with Azure CloudGuard and your questions are identical to the "fun" I had with that setup.
Can you explain how your on-prem SMS is reachable from the internet? Is it NAT'ed behind your on-premise gateway?
We also encountered SMS NAT issues, and you might need to follow sk66381, NAT the control connections, and specify your on-prem gateway as the installation target. If you are lucky and have another Check Point gateway managed by this SMS, it should be easy.
If not, welcome to the world of dummy management objects used to manipulate the masters file... let me know if you need more info on that.
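For anyone following along, the idea is roughly this (from memory of a similar Azure setup, so treat it as a sketch: the path and section names are as I recall them, and `mgmt_dummy` is a placeholder object name whose main IP is the NAT'd public address of the SMS). The masters file on the gateway controls which management object it fetches policy from and sends logs to:

```
# $FWDIR/conf/masters on the cloud gateway (sketch, names are placeholders).
# The dummy object carries the public/NAT'd IP the cloud GW can actually reach.
[Policy]
mgmt_dummy
[Log]
mgmt_dummy
```

With a dummy management object whose IP is the externally reachable address, the cloud gateway stops trying to contact the SMS on its private IP.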
Thanks for your reply.
Our on-prem SMS is, as you described, NAT'd to a single IP behind our on-prem gateway, with the on-prem gateway as the installation target.
I'm seeing that the cloud GW is sending requests to the SMS private IP, so I guess I would have to create a dummy management object. How does it actually take effect on this particular gateway?
Hi Jeroen and Dameon,
Thanks for the assistance. I got that bit working with a dummy management IP address and static NAT configs.
The VPN seems to be coming up from one end only: in SmartView Monitor the vSEC GW shows the tunnel as up, but on-prem shows it as down. Checking tcpdump and fw monitor output, the vSEC gateway shows the on-prem gateway's local (private) IP address as the VPN peer and sends the tunnel test traffic back encrypted through the tunnel.
I don't think it should be this complex to get VPNs up. Something is configured wrong here.
I checked the link selection addresses on both GWs and they showed public IP addresses.
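For anyone retracing these checks, the kind of diagnostics referred to above can be run from the expert shell on either gateway (interface name is a placeholder for your external interface):

```shell
# Watch IKE (UDP 500) and NAT-T (UDP 4500) on the external interface to
# see which peer address each side actually negotiates with.
tcpdump -nni eth0 'udp port 500 or udp port 4500'

# The same traffic as seen by the firewall kernel, using an INSPECT filter.
fw monitor -e 'accept port(500) or port(4500);'

# Interactive tunnel utility: list IKE/IPsec SAs per peer and delete them
# to force a fresh negotiation.
vpn tu
```

Comparing the peer IP seen in these captures on each side against the configured Link Selection address usually shows quickly where the mismatch is.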
I was hoping someone with AWS experience would have replied by now. I only have experience with Azure, and there, Link Selection on the vSEC gateway has to be set to its local external IP (a private IP). Azure then NATs this to a publicly routable IP. The reference architecture guide (sk109360) mentions this. So maybe for AWS you need to change it from the public IP to the private IP and let AWS do its NAT magic.
You say the vSEC GW shows the local IP of the on-prem gateway. Is your on-prem gateway NAT'ed then? Doesn't it have an internet-routable IP on its WAN interface?
I tried changing the Link Selection IP address to both the private and the public IP address provided by AWS.
Our on-prem gateway has a public IP address directly on its interface, which is the address used to form the VPN peering. However, this IP address does not show up on the vSEC GW as the peer.