VPN gateway random UDP traffic to CP peers?
Hi CheckMates,
I'm seeing some strange traffic I cannot explain, and I was wondering if anyone knows what is causing this. Our central VPN gateway seems to be connecting to all of our remote sites' Check Point gateways on seemingly random high UDP ports. The VPNs are otherwise working fine, but I have no idea which process is causing this. Does anyone have an idea why this is happening and how I can stop it?
In the screenshot below, the source is the public IP of the active member of our central VPN cluster, and the destinations are the various public IPs of the Check Point gateways at our remote sites, which are in a star community with the central gateway.
Are the drops out-of-state drops or cleanup drops?
That looks to me like the UDP source port of the traffic being dropped.
A capture will tell you.
I suspect the firewall itself is not starting a UDP connection like this; it is probably the source port of the IKE/NAT-T traffic (500, 4500).
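For example, a capture like this on the central gateway's external interface should show what the ports actually are on the wire (just a sketch; eth0 and 203.0.113.10 stand in for your external interface and one of the remote peer IPs):
    tcpdump -nni eth0 host 203.0.113.10 and udp and not port 500 and not port 4500
Comparing what tcpdump shows with the log entry should make clear whether the high port is the source or the destination on the wire.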
If you like this post please give a thumbs up(kudo)! 🙂
They are cleanup drops. The peers are not behind NAT, so NAT-T is not in play. And even if it were, the destination port would be 500/4500, not some random high UDP port. It's the destination port in the log, not the source port.
What is the source port, anything consistent?
I've checked the source port, and strangely it does seem to be tunnel_test (UDP 18234). That makes it even stranger, since I also see separate successful encrypt logs for them. It's like it's sending the tunnel tests twice?
Maybe this SK will send you in the right direction:
https://support.checkpoint.com/results/sk/sk163835
If you like this post please give a thumbs up(kudo)! 🙂
I've tried adding a No-NAT rule for the tunnel_test traffic, but it is still getting NATted behind the cluster IP (which is fine) and behind random high UDP ports (which is not; it should keep the standard tunnel_test port). Judging from the logs, this seems to be based on an implied rule.
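One way to check where the source port changes could be fw monitor on the active cluster member, since it captures at the pre/post-inspection points (i/I inbound, o/O outbound). A sketch, assuming the port() filter matches either source or destination port:
    fw monitor -e "accept port(18234);"
If I remember the inspection points correctly, the packet should still show source port 18234 at o and the random high port at O once the hide NAT has been applied.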
UDP 18234 is the port used by the tunnel test feature.
The behavior you described could be related to Check Point's VPN tunnel testing feature.
The VPN tunnel testing protocol is designed to ensure that the VPN tunnels are functioning properly and can handle traffic. It periodically sends test packets between the gateways to verify the connectivity and integrity of the VPN tunnels.
Also,
To stop this behavior, you can disable the VPN tunnel testing feature. Here are the steps to disable VPN tunnel testing in Check Point:
1. Log in to SmartConsole (the Check Point management interface).
2. Go to the "Network Management" tab.
3. Select "VPN" from the left-hand menu.
4. In the VPN section, click on "VPN Tunnel Sharing".
5. In the "VPN Tunnel Sharing" window, select the relevant VPN community.
6. Click on the "Advanced" button.
7. In the "Advanced VPN Tunnel Sharing" window, uncheck the option for "Enable VPN tunnel testing".
8. Click "OK" to save the changes.
Not quite sure where you want me to go. The only tab called 'Network Management' that I know of is on gateway objects.
Fair enough, let's try a different approach:
Thanks, but that doesn't describe anything about disabling tunnel testing.
Also, I don't want to disable tunnel testing; I want the gateway to stop source-port NATing it, which it is doing for some strange reason 🙂
The tunnels themselves all work, but the log is getting unnecessarily spammed with the high UDP port traffic hitting the cleanup rule, while it should be accepted by an implied rule as a control connection.
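A quick way to see the drop reason and rule in real time is the kernel drop debug on the gateway; a sketch (run briefly in expert mode, the output is very verbose, Ctrl-C to stop):
    fw ctl zdebug + drop | grep 18234
The drop messages normally include the rule number, which should confirm whether it really is the cleanup rule or something else (out of state, anti-spoofing, etc.).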
I had this but cannot remember the solution.
Is it tunnel testing, permanent tunnels or Dead Peer Detection creating this?
i.e. if any of those are on, turn them off to test.
Permanent tunnels are enabled, but it's all CP <-> CP, so they use tunnel_test. The tunnel_test traffic is getting encrypted and uses a completely different port than the ones we're seeing dropped. tunnel_test is all UDP 18234, hitting the implied rule.
Please share the source ports.
Also, a traffic capture and a VPN debug may tell you more. I think it is DPD.
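If it is DPD or something else in the IKE exchange, a VPN debug on the central gateway should show it. A rough sketch of the usual procedure (run in expert mode and keep the debug window short, since the logs grow quickly):
    vpn debug trunc      # truncate logs and enable IKE + vpnd debug
    # reproduce the traffic / wait for a few tunnel test intervals
    vpn debug off
    vpn debug ikeoff
Then look at $FWDIR/log/ike.elg* and $FWDIR/log/vpnd.elg* (or load ike.elg into IKEView) for the exchanges around the time of the drops.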
If you like this post please give a thumbs up(kudo)! 🙂
I have a rule of thumb, never treat a Check Point issue with logic 😉
Is this a joke? If not, can you please elaborate?
That was a generalization; I say it a lot on conference troubleshooting calls. Especially when other vendors' firewalls are involved, it is usually Check Point behaving oddly, or Check Point's interpretation of the RFCs differing a lot from other vendors' - an exact example doesn't come to mind, but there were a couple with IPsec VPNs where we had to adjust behavior to make them compatible.
It also applies when customers say "this 'should' be how it is behaving because of how such and such is configured", which I will not accept as an answer until I see the behavior happening (or not happening).
Then, if logic does not prevail, it is usually a coding issue.
E.g. 1: Returning decrypted traffic reaches the FW but does not re-enter the tunnel and is silently dropped - fixed with an R&D hotfix.
E.g. 2: The certificate revocation check default changed from CRL to OCSP in a Jumbo Hotfix. The FW could not reach the OCSP responder and should have fallen back to the CRL URL but did not, so VPN certificate authentication failed. An R&D fix for IKEv2 was needed.
I have a few more examples from the past few years I could go find from saved notes.
Did you ever find a solution for this? We are seeing the same thing - I think I know what is happening.
1. Remote Check Point gateway sends tunnel test packet (destination port UDP 18234) to central Check Point gateway.
2. Central Check Point gateway receives packet, and NATs destination to a different interface/IP on the central gateway (as described in sk102729).
3. Central Check Point gateway replies to the tunnel test packet, using the NAT'd interface's IP as the source.
4. Remote Check Point gateway receives the reply to its tunnel test packet, but from a different IP address - the IP address of the NAT'd interface on the central gateway - and drops the packet.
Our logs are filled with dropped traffic with a source port of UDP 18234 and a destination port that is a random high UDP port (which corresponds to the source port of the original tunnel test packet).
I don't know if this is impacting anything other than my sanity and perhaps status of tunnel as shown in SmartView Monitor, but it is annoying.
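One way to confirm that theory would be to capture UDP 18234 on both ends and compare addresses (a sketch; eth0 stands in for the external interface on each gateway):
    tcpdump -nni eth0 udp port 18234
If the remote side sends its test to the central gateway's cluster VIP but the reply comes back from a different IP on the central gateway, that matches steps 2-4 above.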
Dave
I never managed to actually solve it, I just made a specific anti-log rule for the involved gateways so it's not a waste of log capacity. The rule has 14 million hits in about 6 months.
Maybe "Tunnel Testing fails after an upgrade" can be relevant here ?
Jozko Mrkvicka
We only have this issue with SMB gateways, so the JHFs with the fix / that SK are not relevant.
We have a meshed VPN community between two Gaia clusters running R81.20 JHF take 84, and we still see this issue. These same Gaia clusters are also part of a star VPN community where they are the central gateway and SMB devices are the satellites. We see the problem there too. Even stranger - since we upgraded the Gaia clusters to R81.20 (they were previously running R81.10), I do not even see them listening on port 18234 (netstat -anp | grep :18234 returns nothing), even though I can see tunnel test traffic on UDP 18234 going to/from these gateways. It just gets weirder.
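The missing listener might just mean the tunnel test traffic is handled in the kernel/VPN path rather than by a userland socket, so netstat would not necessarily show it - that is an assumption on my part. The connections table may be a better check; a sketch:
    fw tab -t connections -u -f | grep 18234
A packet capture on UDP 18234 would also confirm whether the packets are actually being answered.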
Dave
