Over the past few weeks I have forklifted four 5xxx clusters to 9100s. Spoiler alert: UPPAK has been disabled everywhere.
All 9100s were ordered with nothing special added, just a LOM card: no interface bonding, no VLANs. All were installed with R81.20 with the latest recommended hotfix.
First cluster was a 5600 forklift. This one has the most active connections (typically between 35,000 and 50,000 during the day), but the bandwidth usage isn't anything exorbitant. No issues with connectivity after the upgrade, but I noticed that TX-DRP counters were increasing at an alarming rate, a few hundred thousand per day. On the 5600 cluster, netstat counters had always remained pretty clean. I found this CheckMates thread, reverted to KPPAK, and after a week the netstat counters are back to being clean, with no other change but UPPAK -> KPPAK: https://community.checkpoint.com/t5/Security-Gateways/Packet-timeout-with-unknown-reason-in-Quantum-...
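For anyone wanting to keep an eye on the same counters, here is a minimal sketch of pulling TX-DRP values out of `netstat -ni` output. The column positions are assumed from the usual net-tools layout, and the sample capture below stands in for a live gateway:

```shell
# Sample `netstat -ni` capture (illustrative numbers, not from a real box)
sample='Iface   MTU Met   RX-OK RX-ERR RX-DRP RX-OVR   TX-OK TX-ERR TX-DRP TX-OVR Flg
eth1   1500   0  934021      0      0      0  881230      0 241833      0 BMRU
eth2   1500   0  120442      0      0      0  119870      0      0      0 BMRU'

# Print any interface whose TX-DRP counter (assumed column 10) is non-zero
echo "$sample" | awk 'NR > 1 && $10 > 0 { print $1, $10 }'
```

On a gateway you would pipe the live `netstat -ni` output through the same awk filter and watch whether the reported values grow between runs.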
Second cluster was another 5600. This is by far our largest location from a bandwidth-usage perspective. All of our sites are configured in a single VPN community, and all locations are physical appliances except one CloudGuard instance in Azure. When this site was forklifted to a 9100, all tunnels came up except the one to Azure. I tried all the normal VPN troubleshooting steps: nothing, nada, the tunnel to Azure remained down. I then found this PhoneBoy podcast with Tim Hall, which mentioned there could be weird VPN behavior with UPPAK. I reverted to KPPAK and the tunnel came up immediately, with no issues since. For reference, here is the podcast I was referring to: https://community.checkpoint.com/t5/CheckMates-Go-Cyber-Security/S07E03-What-is-UPPAK/ba-p/245115
The last two sites are pretty vanilla: not much bandwidth usage, typical connection counts around 8K. No issues noted since forklifting from 5400s. But considering the issues I had with the first two sites, I changed both of these clusters to KPPAK after a few weeks.
Just wanted to provide my observations; I'm not looking for any troubleshooting ideas, as I won't be putting any of these sites back on UPPAK on R81.20. Will see what happens when we upgrade to either R82 or R82.10.
Thanks for that @D_TK. I know that in R82 the default mode is indeed user mode. Based on my lab testing so far, it seems okay.
Andy
Glad to hear the podcast I did with @PhoneBoy helped you out with your Azure VPN issue. In regard to this:
> Just wanted to provide my observations, not looking for any troubleshooting ideas as i won't be putting any of these sites back on UPPAK on R81.20. WIll see what happens when we upgrade to either r82 or r82.10.
The plan at the moment is for UPPAK to be mandatory on all platforms in R82.10 (not just Lightspeed/Quantum Force), as announced here; however, R82.10 is still in private EA. As far as I can tell, the kernel-based SecureXL driver (sim) and its associated KPPAK infrastructure are not even present in the private EA R82.10 code, at least that I can see.
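Since KPPAK depends on that kernel-based SecureXL driver (sim), one rough way to tell which mode a gateway is actually running is to look for the module in `lsmod` output. This is a sketch only; the module list below is an illustrative capture standing in for a live KPPAK gateway, not real output:

```shell
# Illustrative lsmod capture (module names/sizes are placeholders)
sample='Module                  Size  Used by
sim                  1234567  2
fw                   7654321  5'

# If the "sim" module is loaded, assume KPPAK; otherwise assume UPPAK
mode=$(echo "$sample" | awk 'NR > 1 && $1 == "sim" { found = 1 } END { print found ? "KPPAK" : "UPPAK" }')
echo "$mode"
```

On a real box you would run `lsmod` directly in expert mode and cross-check against `fwaccel stat`.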
Sounds like R&D took the "burn the boats" approach. 🙂
Thanks for the feedback on the podcast. 🙂
Hey all,
I'm really glad I came across this issue here, as I was just replacing a customer's 5XXX series firewalls with 3970s. The problem there is that the 3970 architecture only supports R82.10 and above. We experienced heavy latency issues during the cutover, which resolved immediately after cutting back. I've been working with Check Point on this since Tuesday, and we've just come to the conclusion that the mode can't be switched to KPPAK in R82.10. Luckily, those 9100s come with R81.20, so we may reach back out to Check Point and have to switch this customer over to devices that would actually function. Thanks again for posting this, D_TK.
There is not nor will there be a KPPAK in R82.10, just FYI.
Yeah, I got that from the articles I read online. If this is the case, then there isn't a reason to move to the 3900 series until R82.10 is moved into GA and its bugs are fixed. I'll have to work with Check Point and our sales team to hopefully switch out the devices with ones capable of being in production in the customer's network.
The reason the 3900s must use R82.10 is that they have ARM processors instead of Intel. The 4.18 kernel used in R82 does not support ARM, so R82.10 updates the Gaia kernel yet again, to 5.14, to obtain ARM support.
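The kernel-to-architecture mapping above can be restated as a tiny helper; this is purely an illustration of the explanation in this thread (the version strings are the ones quoted above), not a product check. On a live gateway you could feed it `uname -r`:

```shell
# Map a Gaia kernel version prefix to architecture support, per the
# explanation above: 4.18 (R82) is x86-only, 5.14 (R82.10) adds ARM.
arm_capable() {
  case "$1" in
    5.14*) echo "ARM-capable (R82.10 Gaia kernel)" ;;
    4.18*) echo "x86 only (R82 Gaia kernel)" ;;
    *)     echo "not a Gaia kernel covered here" ;;
  esac
}

arm_capable 4.18
arm_capable 5.14
arm_capable "$(uname -r)"   # live check; output depends on the box
```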
Can I ask whether it was a general issue or one with specific blades?
We have deployed 3900 series for a segmentation project but the traffic has not yet been redirected. We will only use the FW blade to begin with, no TP or application features.
Hi Alex,
From what I deduced during the cutover, anything going outbound through that firewall was having a latency issue. What I could see during this time was large amounts of TX errors on the devices. We changed ports on the firewall and on the switches in question and found that after a few moments it would start occurring again. We also couldn't access the devices via their external interfaces, which we believed to be related to a policy install error, but this issue occurred EVERY time we reinstalled the policy, whether accelerated or not, even after clearing the policy from the device. The device in question runs over 30+ VPNTs to other locations to run OSPF and route people's traffic up to a Citrix server. We know it isn't the server itself, as internally it worked fine. Anyone accessing it externally, though, or going from that device outbound, had large amounts of lag and latency issues. All issues resolved when we cut back to their 5000 series firewalls. My ticket with Check Point had recommended that we switch to KPPAK, but I was told shortly afterward that the 3900 series cannot and will not be able to do so. Right now, we are working with our Check Point salesperson to inquire about a change of hardware.
Let me know if this helps.
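For anyone chasing a similar cutover, a quick way to confirm TX errors are accruing *during* the event (rather than being a stale total) is to sample the counter twice and compare. This sketch reads the standard Linux sysfs statistics; the interface name eth0 is a placeholder, and the path is assumed to behave the same on Gaia:

```shell
# Read an interface's tx_errors counter, falling back to 0 if the
# interface does not exist (placeholder name)
txerr() { cat "/sys/class/net/$1/statistics/tx_errors" 2>/dev/null || echo 0; }

before=$(txerr eth0)
sleep 1
after=$(txerr eth0)
echo "tx_errors delta over 1s: $((after - before))"
```

A steadily positive delta during the latency window, dropping to zero after cutting back, would line up with what is described above.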
We have Azure CloudGuard tunnels behaving somewhat weirdly on a local cluster of 9200s running UPPAK; troubleshooting hasn't turned up much so far.
Thanks for the report; we will look into switching them to KPPAK.
We also have some 5K series appliances soon to be replaced by 3K series running R82.10.
sk183557 - Jumbo Hotfix Accumulator for Quantum Force 3900 Appliances
I was sent this recently by TAC. Posting it here for awareness; hopefully it can resolve a few people's issues. It looks to have been updated 2 days ago.
Thank you!
Hey @Sbolton, apologies if this may sound like a silly question, but did they say whether all these fixes are included if someone is on a higher jumbo take than 22?
I assume they would be, but just wanted to make sure...
Andy
There is no jumbo higher than 22 for 3900s with R82.10.
Thanks Emma, I realized that after I posted my response.
Andy