Has anyone else experienced issues with the fw passing traffic when updating to take 196?
Previously we did not have to push policy when updating between minor takes (e.g., 155 to 196).
However, we ran into issues last night: after updating to take 196, new connections were not accepted until we pushed policy post-patch and reboot.
Our assumption was that since the firewall policy is cached on the gateway and loaded on reboot, a policy push shouldn't be mandatory after patching.
v/r,
Jon
(we have opened a TAC case for our RFO)
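In case it helps others triage the same symptom, here is a minimal check (assuming a standard Gaia expert shell; output will differ per environment) to see what the gateway is actually enforcing after the reboot:

# Show the name and install date of the policy currently enforced.
# If this reports "InitialPolicy" or "defaultfilter" instead of your
# policy package, the locally cached policy did not load on boot.
fw stat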
We saw this on our recent VSX upgrades to take 196 (three separate clusters had the issue). Our upgrades were from 155 to 196.
Also resolved with a policy install. Oddly, it did not impact all VSs, only some.
Fortunately I found a community post at the time, as I was scratching my head over what had happened.
I didn't bother with a TAC case since I had already completed all of the required upgrades, but it certainly seems to be an issue for multiple Check Point customers.
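For anyone checking which Virtual Systems were affected, a quick sketch (run from VS0 in expert mode; the VS ID below is just an example):

# List all Virtual Systems with their state and the policy each one is
# currently enforcing; any VS still on "InitialPolicy" needs a policy push.
vsx stat -v
# Switch context to a specific VS (here VS 3 as an example) and check it directly
vsenv 3
fw stat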
Hi!
We had the same issue on a VSX cluster. We did not have an overall outage, but most allowed traffic was blocked by the final clean-up rule. A manual policy installation did not help. We uninstalled take 196 again and opened a TAC case.
Martin
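One way to confirm that traffic is actually dying on the clean-up rule rather than elsewhere is a short kernel drop debug; a sketch only, and best run for just a few seconds since the output is verbose and CPU-heavy:

# Stream kernel drop messages in real time; each line includes the drop
# reason and the rule that dropped the packet. Stop with Ctrl+C.
fw ctl zdebug + drop
# Optionally narrow the output to one client IP while reproducing the issue
fw ctl zdebug + drop | grep <client-ip>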
Yesterday we deployed the same take (196) on our R80.30 kernel 2.6 VSX HA cluster of two 23500 appliances.
After installing the JHF on the last member, sync was broken and completely corrupted on that member (Active/Down). So far we haven't managed to uninstall the JHF, since we ran out of time in the maintenance window.
Case opened: 6-0002039948
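For reference, a minimal sketch of how to inspect cluster and sync state on the affected member (output will vary):

# Show this member's cluster state (Active, Standby, Down, ...)
cphaprob state
# Show Delta Sync statistics to see whether state synchronization is working
cphaprob syncstat
# On VSX, repeat the checks inside the affected VS context (example: VS 2)
vsenv 2
cphaprob state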
Hi!
Did you get an update from the TAC team? What have they found so far?
Best regards
Martin
Is it on 2.6 or 3.10? I've deployed Take 196 in large VSX environments without apparent issues, running R80.30 3.10, in case it would make a difference.
k3.10
firewall-1 ~ # cpinfo -yall
This is Check Point CPinfo Build 914000202 for GAIA
[IDA]
No hotfixes..
[MGMT]
No hotfixes..
[CPFC]
HOTFIX_R80_30_GOGO_JHF_MAIN Take: 155
[FW1]
HOTFIX_MAAS_TUNNEL_AUTOUPDATE
HOTFIX_R80_30_GOGO_JHF_MAIN Take: 155
FW1 build number:
This is Check Point's software version R80.30 - Build 001
kernel: R80.30 - Build 159
[SecurePlatform]
HOTFIX_R80_30_GOGO_JHF_MAIN Take: 155
[PPACK]
HOTFIX_R80_30_GOGO_JHF_MAIN Take: 155
[CPinfo]
No hotfixes..
[CPUpdates]
BUNDLE_MAAS_TUNNEL_AUTOUPDATE Take: 25
BUNDLE_CPINFO Take: 50
BUNDLE_INFRA_AUTOUPDATE Take: 32
BUNDLE_DEP_INSTALLER_AUTOUPDATE Take: 13
BUNDLE_R80_30_JUMBO_HF_MAIN_3_10_GW Take: 155
[AutoUpdater]
No hotfixes..
[DIAG]
No hotfixes..
[CVPN]
No hotfixes..
[CPDepInst]
No hotfixes..
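Since the 2.6 vs 3.10 question keeps coming up in this thread, and the cpinfo output above does not show the Linux kernel line directly, here is a quick way to check (expert mode):

# A version starting with 2.6.18 means the Gaia 2.6 kernel,
# 3.10.0 means the Gaia 3.10 kernel
uname -r
# Gaia clish equivalent; the output includes an "OS kernel version" line
clish -c "show version all"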
Hi All,
My name is Yifat Chen and I manage the R80.30 Jumbo releases at Check Point.
Thanks for all the details you shared here. We have a ticket associated with this issue and will update here ASAP with our findings.
Release Management Group
Hi,
Are there any updates regarding this issue?
We are planning to update our gateways to Jumbo 196.
Thanks and best regards
Tobias
TAC:
"When upgrading the jumbo hotfix on a gateway pushing policy is not a required step. The gateway should load the last successfully pushed policy post-reboot.
However, if encountering traffic issues post hot-fix installation one of the first steps recommended would be pushing policy. "
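If a gateway does come up without the expected policy, it is also possible to reload the locally stored policy from the gateway itself instead of doing a full install from management; a minimal sketch, assuming a policy was successfully fetched at some point before:

# Re-apply the policy most recently fetched from management, using the
# copy stored locally on the gateway (no SmartConsole session needed)
fw fetch localhost
# Or fetch a fresh copy directly from the management server
# (<mgmt-server> is a placeholder for its name or IP)
fw fetch <mgmt-server>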
Hi!
As described above, in our case this did not help.
We pushed the policy after upgrading to jumbo take 196 and rebooting, but the policy was still not enforced properly.
Best regards, Martin
Hello!
Can you please share your internal findings? Can we be sure that this was a bug in JT196?
Best regards
Martin
Hi All,
My apologies for the late response.
As reported in this thread, we tried to reproduce this issue in our lab in order to investigate it, but had no luck. There have also been no other complaints or TAC tickets about this issue on our later Jumbo Takes.
Our recommendation is to use the latest GA Jumbo Take (currently #215, though the latest ongoing take, #217, will be moved to GA soon).
If you face the same issue in the future, the best approach is to open a ticket with our support so they can debug and troubleshoot the problem while it is occurring.
I can assure you that we take these issues seriously and will be available to assist whenever required.
Thanks,
Release Management Group
I just want to share that we had the same problem on all our non-VSX HA clusters with T191.
So it seems like the problem was first introduced with T191 and is not VSX related.
Details if needed:
Source Version:
R80.30 Gaia 2.6 JHF 155
Target Version:
R80.30 Gaia 2.6 JHF T191
Symptoms exactly as described here.
After having the same experience with our first two clusters, we changed the workflow for all following clusters to avoid the problem.
We later tried a reboot to reproduce the problem, but it was not reproducible, so it only occurred on the first boot after the minor take update.
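This is not the poster's exact workflow, but given that a policy install resolves the issue per the reports above, one post-upgrade step that could be scripted from the management server looks roughly like this (the package and target names are placeholders):

# Run on the management server after each member has been patched and
# rebooted: push policy via the R80.x management API
mgmt_cli -r true install-policy policy-package "Standard" targets.1 "cluster-1"
# Then verify on the gateway that the expected package is enforced
fw stat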
Hi,
We also had some problems with JHF Take 196. First, the same issue with policy not being enforced properly, which was solved by pushing policy. However, some days later the active member of a ClusterXL was lost and had no connectivity at all; we had to connect through the console port, and we saw that the appliance was stuck in this loop:
[<ffffffff802447c0>] (i8042_interrupt+0x0/0x240)
Disabling IRQ #1
BUG: soft lockup - CPU#0 stuck for 10s! [kseriod:107]
BUG: soft lockup - CPU#0 stuck for 10s! [kseriod:107]
BUG: soft lockup - CPU#0 stuck for 10s! [kseriod:107]
BUG: soft lockup - CPU#0 stuck for 10s! [kseriod:107]
BUG: soft lockup - CPU#0 stuck for 10s! [kseriod:107]
BUG: soft lockup - CPU#0 stuck for 10s! [kseriod:107]
BUG: soft lockup - CPU#0 stuck for 10s! [kseriod:107]
BUG: soft lockup - CPU#0 stuck for 10s! [kseriod:107]
BUG: soft lockup - CPU#0 stuck for 10s! [kseriod:107]
BUG: soft lockup - CPU#0 stuck for 10s! [kseriod:107]
BUG: soft lockup - CPU#0 stuck for 10s! [kseriod:107]
BUG: soft lockup - CPU#0 stuck for 10s! [kseriod:107]
BUG: soft lockup - CPU#0 stuck for 10s! [kseriod:107]
BUG: soft lockup - CPU#0 stuck for 10s! [kseriod:107]
BUG: soft lockup - CPU#0 stuck for 10s! [kseriod:107]
BUG: soft lockup - CPU#0 stuck for 10s! [kseriod:107]
BUG: soft lockup - CPU#0 stuck for 10s! [kseriod:107]
irq 1: nobody cared (try booting with the "irqpoll" option)
handlers:
[<ffffffff802447c0>] (i8042_interrupt+0x0/0x240)
Disabling IRQ #1
We have experienced this in two clusters. TAC suggested upgrading to the JHF ongoing Take 215 (not GA Take 214), which has resolved the issue so far. Some other problems with the management server were solved with Jumbo 215 as well.