Hello,
while working on a customer's firewall infrastructure, we accidentally ran into the issue reported in the following SK:
https://support.checkpoint.com/results/sk/sk184766#
For this specific customer we plan to install the suggested hotfix.
Hope this is useful.
Very frustrating that, apart from the one-liner above, no information about the different takes is given. And per your example, MDM is not the only thing that requires higher takes.
Check Point, wake up!
Silent updates are disingenuous to your customers. This is an unprofessional practice. I see a pattern of poor quality releases, which I am not happy about.
Writing code is hard, and you might not get it right the first time, but don't try to cover it up. Be honest and forthcoming if there is a new release. For major issues like this, there should be an incident page with frequent updates that everyone can look at. Making your customers check SK articles for updates on active incidents is poor customer support.
I see a pattern where Check Point are trying to downplay a serious issue, causing problems for many customers. Using words like 'some customers may be affected' makes it seem like it's not a big issue - when it really is, because this impacts so many parts of the product and the symptoms are wide ranging.
This debacle has made me quite angry.
A very long time ago, I was a Sys Admin for a very large (15,000 users) NetWare 4 deployment at a University campus. Novell released a Service Pack with a broken driver which resulted in a Server ABEND (crash) under heavy load. A week or so after releasing the faulty Service Pack, they silently updated it with the fixed driver. But they never updated the release notes or told anyone... Everybody who'd downloaded the first release potentially suffered the same chaos, with servers randomly rebooting.
Three decades later it still leaves a very bitter taste in the mouth... I still get cold sweats just thinking about the days & hours I spent troubleshooting it... Years later I spoke to someone who worked for Novell, and internally they knew the trouble it caused, but they concluded that staying quiet was best for their corporate reputation. Unforgivable!
If Check Point wants to keep the trust of its customers, then it's vital for Check Point to stay open and transparent and improve how it communicates with its loyal customers when major problems are discovered. It's going to happen from time to time, and how Check Point responds is key. Don't become another Novell. Look what happened to them...
Yes, you are totally right ...
Trust is of the utmost importance ... errors can and will happen; it's just a question of how you deal with issues like that in a professional way ... customers do appreciate transparent and honest handling of problems ...
Agreed 100%. Just as disappointing is the lack of response to your post from any Check Point employee. It's like they are doubling down on poor customer service. Reminds me of the cpsho_user debacle (Solved: cpsho_user config pushed from Check Point? - Check Point CheckMates)
Dave
After patching my ClusterXL 3970 appliances (R82.10 Take 464 with the T9 fix) and my management open server (R82 Take 60 with T6), the setup works and the certs are OK now. But 🙂 there is always some "but" in the story: now, when I try to check for updates in CPUSE from the Gaia WebUI
on my R82.10 appliances, I get this error:
On my management it works. I have already opened a case, but TAC is silent about this ("they are working on it").
So if anyone knows, please help.
There is a new Deployment Agent version that they are phasing in. If you want to install it now, it is version 2742. I opened an SR and they provided the file: DeploymentAgent_000002742_1.tgz. After installing it, my gateways, SMS, and log servers could check for updates again.
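If you want to confirm which Deployment Agent build a box is actually running before and after applying the file, something like this should work (a hedged sketch; command name per the CPUSE CLI described in sk92449, so verify on your version):

```shell
# From expert mode: show the Deployment Agent build number.
# Expect it to read 2742 after the update has been applied.
clish -c "show installer status build"

# The DeploymentAgent_000002742_1.tgz file itself is imported through the
# CPUSE WebUI (or as TAC instructs in the SR); I'm not aware of a documented
# clish one-liner for installing a local DA package, so treat that step as manual.
```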
A quick-and-dirty workaround for this issue is to temporarily move the gateway or management clock 24 hours ahead into the future.
Yep, it's that easy.
That's far from an ideal workaround, but if you're stuck knee-deep in this where-did-my-Feb-29th-go craze, it's worth a try.
Moving the clock forward should neutralize the bug in every piece of yet-to-be-fixed Check Point code.
Works for:
- the CPUSE "Connection Error, FDT - Unexpected error code" issue
- new gateways unable to download their license
- new gateways unable to connect to Smart-1 Cloud / MaaS
Again, not ideal, but worth a try.
If your equipment is going through initial setup or planned maintenance, the system date being off by 1 day is probably not that bad.
This whole March 1st bug is totally effed up.
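For what it's worth, the "24h ahead" arithmetic looks like this (an illustrative sketch using plain GNU date(1); on a real gateway you would disable NTP first and set the resulting value via Gaia clish, e.g. "set date" / "set time", command names assumed, check your Gaia admin guide):

```shell
# Compute a timestamp exactly 24 hours in the future (UTC).
now=$(date -u +%s)
future=$((now + 86400))            # 86400 seconds = 24 hours

# This is the value you would feed to the gateway's clock,
# e.g. "set date YYYY-MM-DD" and "set time HH:MM:SS" in clish.
date -u -d "@$future" '+%Y-%m-%d %H:%M:%S'
```

And remember to move the clock back (re-enable NTP) once the actual fix is installed.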
Hello,
yes, really: after changing the time, CPUSE immediately started to work ...
Interesting stuff ... changing the time worked pretty well for CPUSE ...
Best regards
Also ...
On Quantum Spark, it seems Reach My Device is no longer working ...
Firmware: R82.00.10 (998001654)
The page just does not load.
Has anybody encountered the same issue?
On a patched system it looks like this:
[Expert@ZZZZZZ01]# ps -ef | grep ssh
root 3395 2930 0 Mar05 ? 00:00:00 sshd: /pfrm2.0/bin/sshd -f /pfrm2.0/etc/sshd_config -p 22 -D [listener] 0 of 10-10 startups
root 4901 31877 0 13:37 pts/1 00:00:00 grep ssh
When not patched, it's:
[Expert@XXXXX01]# ps -ef | grep ssh
root 5120 3983 0 Feb17 ? 00:00:00 sshd: /pfrm2.0/bin/sshd -f /pfrm2.0/etc/sshd_config -p 22 -D [listener] 0 of 10-10 startups
root 8233 1 0 Feb17 ? 00:00:35 /bin/bash /opt/fw1/bin/access_service_ssh_tunnel.sh
This process is not running:
"access_service_ssh_tunnel.sh"
That's not funny.
Build 998001654 is NOT the patched one.
(I guess you mistyped it and actually meant 998001564 -- still not the patched one.)
You should probably switch to build 998002110 instead.
Hello,
yes, you're right ... I just figured this out ...
I notified my customers on 4th March about the patched firmware ... that was version
"998001654"
Today, on the 16th, it's "998002110"
Well, highly annoying: the SK doesn't mention any version change or release notes ...
This again creates workload for all customers who were quick and installed a "wrong" version of the firmware ...
BTW, with 998002110, Reach My Device works again!!!
best regards
We have two productive Check Point environments on R81.20, and we see the following issue:
[Expert@xxx:0]# curl_cli -vvvk https://updates.checkpoint.com
* Rebuilt URL to: https://updates.checkpoint.com/
* Trying 18.245.31.62...
* TCP_NODELAY set
* Connected to updates.checkpoint.com (18.245.31.62) port 443 (#0)
* ALPN, offering http/1.1
* *** Current date is: Wed Mar 11 08:08:38 2026
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* err is -1, detail is 2
* *** Current date is: Wed Mar 11 08:08:38 2026
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, [no content] (0):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* err is -1, detail is 2
* *** Current date is: Wed Mar 11 08:08:38 2026
* TLSv1.3 (IN), TLS handshake, [no content] (0):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, [no content] (0):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, [no content] (0):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, [no content] (0):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_128_GCM_SHA256
* ALPN, server accepted to use http/1.1
* servercert: Activated
* servercert: CRL validation was disabled
* Server certificate:
* subject: CN=*.checkpoint.com
* start date: Jun 3 12:12:04 2025 GMT
* expire date: Jul 5 12:12:03 2026 GMT
* issuer: C=BE; O=GlobalSign nv-sa; CN=GlobalSign GCC R3 DV TLS CA 2020
* SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway.
* servercert: Finished
* TLSv1.3 (OUT), TLS app data, [no content] (0):
* TLSv1.3 (IN), TLS handshake, [no content] (0):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS app data, [no content] (0):
< HTTP/1.1 404 Not Found
< Content-Type: text/plain; charset=utf-8
< Content-Length: 15
< Connection: keep-alive
< Date: Wed, 11 Mar 2026 07:08:38 GMT
< Server: awselb/2.0
< X-Cache: Error from cloudfront
< Via: 1.1 b7c8b552077b93dc0acaa0b82d11fa62.cloudfront.net (CloudFront)
< X-Amz-Cf-Pop: FRA56-P8
< X-Amz-Cf-Id: PQ_vDANXFlWawcxX7u_5xnyqfAn9Vj7eMRvxZyJL1HAB9AY9qinhEg==
<
* Connection #0 to host updates.checkpoint.com left intact
This seems unrelated to this discussion.
The OpenSSL default configuration is notoriously broken on Check Point gateways, and you need to manually specify the path to a valid root certificate store (see sk110779).
Try this:
curl_cli -vvvk --cacert $CPDIR/conf/ca-bundle.crt https://updates.checkpoint.com
(and you may just as well remove the -k option now)
Today I checked the SK again for this issue ... and now I see yet another new hotfix?
Check_Point_R82_JHF_T60_TIME_FIX_655_MAIN_Bundle_T11_FULL.tar
For my customers I have installed
Check_Point_R82_JHF_T60_TIME_FIX_655_MAIN_Bundle_T2_FULL.tar
So what is T2, and why is there now a T11 for the same issue?
How many versions of this CRL hotfix were published between T2 and T11?
Why does Check Point change the hotfixes for the same issue?
Why is there no clear documentation about each hotfix generation?
Are earlier versions of this hotfix prone to errors?
I received a lot of calls from customers regarding VPN issues ... they all seem to start after installing these CRL hotfixes ...
And I see that with this T2 version, CPUSE does not work well ...
Hi,
If we have Check_Point_R82_JHF_T60_TIME_FIX_655_MAIN_Bundle_T2_FULL.tar installed, is it possible to install R82 Take 91 directly, or do we need to uninstall the specific CRL hotfix first?
Always uninstall hotfixes first, as they are only for the respective JHF.
You can install JHF t91 over the top of the CRL fix, because it contains that same fix. Run the verify option to make sure.
Verification says it's OK to install Take 91 without uninstalling.
I've heard both suggestions. TAC said that we should uninstall.
I ended up uninstalling all of the crl fixes until there were none applied, then applying the latest.
Uninstalling hotfixes before installing a new jumbo is typically safe (there are rare exceptions), but when the jumbo you're installing includes the hotfix, it's unnecessary. Verification of the jumbo will tell you if you need to uninstall a hotfix or not.
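To see what the verifier itself says on your box before committing, the CPUSE CLI can do the check (a sketch; command syntax per the CPUSE CLI in sk92449, and substitute the real package number from the package listing):

```shell
# List imported/available packages and note the number of the R82 Take 91 jumbo.
clish -c "show installer packages"

# Dry-run verification of that package; the output states whether any
# installed hotfix (e.g. the CRL fix) must be uninstalled first.
clish -c "installer verify <package-number>"
```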
Problem is, there are multiple releases (Takes) of the CRL patch which all have the same version number.
Definitely best to uninstall the CRL hotfix before installing the latest recommended JHFA.