Hello,
I would like to share an experience I had with a site-to-site VPN deployment between CheckPoint and AWS. I was configuring only the CheckPoint side.
Public IPs and encr. domains are randomly selected for this example.
Company A (my side):
CheckPoint R81.10 (cluster)
Public IP: 7.7.7.7
Encr. domain (10.10.10.0/24), but the application server has a different IP (e.g. 10.10.20.50).
Company B:
AWS Gateway:
Public IP: 1.1.1.1 (tunnel 1) and 2.2.2.2 (tunnel 2)
Encr. domain (172.16.0.0/24)
Devices on our side do not connect to 172.16.X.X directly; there is also NAT for the Company B encr. domain, let's give it 172.17.0.0/24.
In summary:
Connection from our side: server 10.10.20.50 connects to 172.17.0.35 on port 22. CheckPoint translates both source and destination to src 10.10.10.50 and dst 172.16.0.35, port 22, before sending the traffic through the tunnel interface to AWS.
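Just to illustrate, this is roughly how the manual NAT rule looked in the policy (object names are made up for this post, only the IPs match the example above):

Original Source:        10.10.20.50 (App_Server)
Original Destination:   172.17.0.35 (CompanyB_Server_NATed)
Original Services:      ssh (TCP/22)
Translated Source:      10.10.10.50 (Static)
Translated Destination: 172.16.0.35 (Static)
Translated Services:    Original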
Okay, the guy managing the AWS side sent me the AWS configuration text file generated for the CheckPoint vendor, which contains instructions on how to deploy everything on the CheckPoint side. I configured tunnel interfaces, static routes, etc. The config file instructed me to use "Empty_Group" as the encr. domain for both interoperable devices, 1.1.1.1 (tunnel 1) and 2.2.2.2 (tunnel 2), and then to choose "One VPN tunnel per Gateway pair" in "VPN Tunnel Sharing". This means AWS keeps the Local and Remote IPv4 Network CIDR at the default, so 0.0.0.0/0.
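For reference, the Gaia clish part looked roughly like this (run on each cluster member; the 169.254.x.x inside-tunnel addresses come from the AWS config file and are placeholders here, same as the peer object names):

add vpn tunnel 1 type numbered local 169.254.44.2 remote 169.254.44.1 peer AWS_CompanyB_T1
add vpn tunnel 2 type numbered local 169.254.45.2 remote 169.254.45.1 peer AWS_CompanyB_T2
set static-route 172.16.0.0/24 nexthop gateway address 169.254.44.1 on
set static-route 172.16.0.0/24 nexthop gateway address 169.254.45.1 on
save config

This is from memory, so double-check the exact syntax against the file AWS generates for you.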
However, the guy managing AWS insisted on selecting a specific encr. domain. I didn't agree, since the config file instructed to use an empty group, but it was worth testing. I added both the 172.16.0.0/24 and 172.17.0.0/24 networks to the previously empty group objects of both interoperable devices (1.1.1.1, 2.2.2.2) and switched to "One VPN tunnel per subnet pair" in the VPN star community. Surprisingly it worked: both AWS gateways and CheckPoint nicely negotiated <172.16.0.0 - 172.16.0.255><10.10.10.0 - 10.10.10.255>, and both tunnels were UP. The only downside was that when I had previously built VPNs to AWS with "gateway pair/0.0.0.0/0", I was able to ping the remote 169.254.X.X IP address of the AWS tunnel interface from the CheckPoint firewall CLI, but with the specific encr. domain configuration the ping didn't work. In summary, tunnels were UP and connections from our servers to the AWS server were stable, no problem at all. All good.
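For completeness, the quick checks I did from the gateway CLI were nothing fancy: the interactive vpn tu menu to look at the IKE/IPsec SAs per peer, and a plain ping to the AWS inside-tunnel address (placeholder IP again):

vpn tu
ping 169.254.44.1

With the gateway pair / 0.0.0.0/0 setup the ping answered; with the specific encr. domain it didn't, even though the actual server traffic was fine in both cases.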
--------------------------------------------------------------------------------------------------------------------------------------
A couple of months later, there was a request to deploy a new VPN to AWS for another customer. Same scenario; the only difference was that in this case they were connecting to our application server. However, I later created a test rule to initiate the connection from our side, so it really doesn't matter.
Company A (my side):
CheckPoint R81.10 (cluster)
Public IP: 7.7.7.7
Encr. domain (10.10.10.0/24), but the application server has a different IP (e.g. 10.10.30.50) and my test device is 10.10.40.8.
Company C:
AWS Gateway:
Public IP: 3.3.3.3 (tunnel 1) and 4.4.4.4 (tunnel 2)
Encr. domain (172.31.0.0/24)
Devices on our side do not communicate with 172.31.X.X directly; there is also NAT for the Company C encr. domain, let's give it 172.32.0.0/24.
In summary, connection from our side:
Company A Before NAT: 10.10.40.8
Company A After NAT: 10.10.10.50
Company C Before NAT: 172.32.0.20
Company C After NAT: 172.31.0.20
I configured this VPN the same way as the previous one and, surprise, it didn't work. I tried several changes and troubleshooting steps. The main problem was that CheckPoint did not negotiate <10.10.10.0 - 10.10.10.255><172.31.0.0 - 172.31.0.255> in Phase 2, but instead tried to negotiate the public IPs, so <7.7.7.7><3.3.3.3> or <7.7.7.7><4.4.4.4> depending on the peer. The AWS side, of course, rejected that proposal. The first thing that helped was unchecking "Set Permanent Tunnels"; then CheckPoint finally used the correct encr. domain pair for one of the peers, but for the second one it again used the public IPs. It depended on which tunnel first agreed on the correct encr. domain; for the other one, CheckPoint kept trying to negotiate the public IPs, which was of course rejected by the AWS side.
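In case someone wants to watch the same thing on their own box, I was looking at the Phase 2 proposals with the classic IKE debug from expert mode (careful on a busy production gateway):

vpn debug trunc
# reproduce the traffic / let the tunnels try to negotiate
vpn debug ikeoff
vpn debug off
# then open $FWDIR/log/ikev2.xmll (IKEv2) or $FWDIR/log/ike.elg (IKEv1) in IKEView

Plus the regular SmartConsole VPN logs, which is where the Key Install entries mentioned below come from.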
Then I looked at the logs for the VPN between our side and Company B, and I noticed a difference. In the previous deployment with Company B, the AWS side is constantly negotiating Phase 2: both of their peers, 1.1.1.1 and 2.2.2.2, send Key Install with the same <172.16.0.0 - 172.16.0.255><10.10.10.0 - 10.10.10.255>, and CheckPoint accepts the same encr. domain from both peers.
For Company C, however, our CheckPoint is the one sending Key Install and, as I mentioned, one of them always carries the public IPs and gets rejected. Luckily, I had screenshots of the AWS configurations from both companies, and I noticed a difference.
Company B had the VPN tunnel option "Startup action" set to "Start", while Company C had it set to "Add".
Explanation from the AWS documentation:
Source: https://docs.aws.amazon.com/vpn/latest/s2svpn/VPNTunnels.html
Startup action
The action to take when establishing the tunnel for a VPN connection. You can specify the following:
• Start: AWS initiates the IKE negotiation to bring the tunnel up. Only supported if your customer gateway is configured with an IP address.
• Add: Your customer gateway device must initiate the IKE negotiation to bring the tunnel up.
I asked the guy configuring the Company C AWS side to change the option to "Start", and it started working smoothly.
But I still don't understand WHY CheckPoint can't send the same encr. domain to two peers, even though it can accept it when the other side negotiates it. Why isn't CheckPoint's behavior the same in both directions?
The last thing I did was check the logs for the VPN between us and Company B again. As I mentioned, the AWS side was negotiating Phase 2, but I adjusted the log filter to search for cases where our side was negotiating, just curious whether I would find something. I found a few Key Installs from our side and, surprise, the same problem 🙂 So probably I just didn't notice it back then, or maybe I saw it in the logs, but since the AWS side was negotiating and that worked, I probably forgot about it.