I have several VPN tunnels to AWS, and at random traffic stops passing through them.


When the fault occurs, the symptoms are:

- Tunnel is up

- Phase 1 and Phase 2 established


The problem is resolved when we restart IKE on the Check Point side (vpn tu, option 7), but after a while it happens again. My tunnel configuration is as follows:


IKEv1 Phase 1:


- Encryption Algorithm: AES-128

- Data Integrity: SHA1

- Diffie-Hellman group: Group 2 (1024-bit)


Phase 2:

- Encryption Algorithm: AES-128

- Data Integrity: SHA1

- IKE Security Association (Phase 2): Use Perfect Forward Secrecy (Group 2)


IKE (Phase 1):

Renegotiate IKE Security associations every (minutes): 480


IPsec (Phase 2):

Renegotiate IPsec security associations every (seconds): 3600

NAT: Disable NAT inside the VPN community


DPD is configured on the cluster and in the AWS VPN community, and ping monitoring is configured on the static-route interfaces.


Tunnel Management

- Permanent tunnels: establish permanent tunnels on all tunnels in the community.


- VPN Tunnel Sharing: One VPN tunnel per Gateway pair.

VPN Routing: To center, or through the center to other satellites, to Internet and other VPN targets




When I check the logs, the traffic is being dropped by the cleanup rule.


I would appreciate your support; TAC still has not found the cause.
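For reference, the IKE restart mentioned above is done from the gateway's expert-mode shell with the interactive vpn tu menu. A sketch of the relevant option (the menu text and option numbers vary slightly by version, so check your own menu before deleting SAs):

```
[Expert@gw:0]# vpn tu
**********     Select Option     **********
...
(7)     Delete all IPsec+IKE SAs for a given peer (GW)
...
(Q)     Quit
```

Deleting the IPsec+IKE SAs for the AWS peer forces a full renegotiation of Phase 1 and Phase 2, which is why traffic recovers afterwards.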

17 Replies

What version of gateway are we talking about here?
Have you forced DPD on the gateway via the registry in addition to making the necessary change in the gateway object with GUIdbedit?
When it works, what do you see in the logs?


The problem occurs on several clusters, on version R80.10.

I have enabled DPD in responder mode and in monitoring mode:


To enable DPD Responder Mode:

Run on each gateway:
ckp_regedit -a SOFTWARE/CheckPoint/VPN1 forceSendDPDPayload -n 1

Enable the keep_IKE_SAs property in SmartDashboard to prevent a problem where the Check Point gateway deletes IKE SAs:
In SmartDashboard, go to Global Properties > SmartDashboard Customization > Advanced Configuration > VPN advanced properties > VPN IKE properties.
Change keep_IKE_SAs to true.

To enable DPD monitoring:

On each VPN gateway in the VPN community, configure the tunnel_keepalive_method property in GuiDBedit Tool (see sk13009) or dbedit (see sk13301). This includes 3rd party gateways. (You cannot configure different monitoring mechanisms for the same gateway.)

In GuiDBedit Tool, go to Network Objects > network_objects > <gateway> > VPN.
For the Value of tunnel_keepalive_method, select a permanent tunnel mode.
Save all the changes.
Install policy on the gateways.
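As a sketch, the same change can be made from the management server with dbedit instead of GuiDBedit. The gateway object name gw1 below is a placeholder for your own object, and tunnel_test is the value that typically corresponds to permanent-tunnel monitoring (dpd being the alternative); verify the valid values for your version before applying:

```
# on the management server (transcript; commands are typed at the dbedit prompt)
dbedit -local
modify network_objects gw1 VPN:tunnel_keepalive_method tunnel_test
update network_objects gw1
quit
```

As with the GuiDBedit route, the change only takes effect after you install policy on the gateways.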



Yes, DPD is enabled via GuiDBedit and on the gateways.

Both modes are active:

- DPD responder mode
- DPD monitoring mode

Hi Juan,


Did you manage to fix your issues with the AWS <-> Check Point VPNs? We see the same behavior on all of our gateways.




Any fix? We have the same issue

Any solution? We have already observed the same problem on two different gateways.

But in our case traffic stops working entirely: the tunnels are up, Phase 1 and Phase 2 are established, and we see that we are sending packets over the VPN and some packets are reaching the gateway back, but nothing on the inside interface.

We have already lost 3 partner connections to AWS, without resolution for about 3 weeks. Tons of troubleshooting with Check Point support, but still nothing.


Following this thread. We have the same issues with site-to-site tunnels to AWS.

5600 gateway devices running R80.20.


I have the same problem on a 5900 with R80.10; after changing to a 6700 with R80.40, the same problem occurs.


Hi, we found a solution; the problem is with NAT toward AWS. To avoid it, we added an additional NAT rule where "Original Source" is set to the CP internal interface and all other NAT fields are left unchanged. That made NAT get handled properly on the AWS side.


Where do you find the NAT rules on the AWS side?


No, it is on the Check Point itself.


So you added a rule in the CP NAT rules like this:

internal_int_cp / internal_net_aws / Original, and in the VPN community also "Disable NAT inside the VPN community"?

Do you think the VPN resets could happen because the MTU is 1500?

It's a new 6700 with R80.40, HF 48.



Just "internal_int_cp" was enough.


Hello Everyone


I hope you guys are well


We see the same behavior, with the difference that the tunnel goes down only on the weekend. We have to perform the same steps (vpn tu option 7, or reset the tunnel from SmartView Monitor).

What do you mean by "internal_int_cp"?

Original Source = internal_int_cp, Original Destination = Original, Translated Source = Original, Translated Destination = Original

Thank you so much!


best regards



internal_int_cp = the private IP address assigned to the Check Point (on the interface, or the VIP for a cluster) from which you are initiating the IPsec tunnel.



We had the same issue, where traffic was not passing through the VPN tunnel.

The solution was to set up an IP SLA on a Cisco router that periodically pings a host on the other end of the VPN tunnel in AWS.
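As an illustration, such a keepalive can be configured with Cisco IOS IP SLA. The target address 10.1.2.3 (a host inside the AWS VPC) and the source interface name here are hypothetical; substitute your own:

```
! Periodically ping a host behind the AWS tunnel to keep traffic flowing
ip sla 10
 icmp-echo 10.1.2.3 source-interface GigabitEthernet0/1
 frequency 30
ip sla schedule 10 life forever start-time now
```

The steady trickle of ICMP keeps the tunnel's SAs in use, which works around blackholing that only appears after the tunnel goes idle.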


Permanent tunnels and the advanced keep_IKE_SAs setting in sk142355 have resolved several issues that I've seen with AWS.
