Good afternoon, everyone.
We have two 1530 SMB appliances in a cluster showing strange behavior. After the power is lost and subsequently restored:
1. The devices boot;
2. The management console sees them;
3. There is no connectivity for the equipment behind the cluster;
4. Once the policy is installed from the management console, network connectivity returns.
It looks as if the default policy is loaded after boot.
Unfortunately, the devices are at a remote site and I am not yet sure how best to diagnose this; the only idea so far is to reinstall the image.
Which version are the devices running / installed with?
Current image name: R80_992002467_20_35
Current image version: 467
Default image name: R80_992000884_20_01
Default image version: 884
kernel: R80.20.35 - Build 344
What about upgrading the firmware to a more recent version (.40, .50)? I would suggest testing without the cluster configured, to exclude clustering as the cause of the issue. Like an SMB that always entered the First Time Wizard after a reboot, this looks like a possible RMA.
I would like to be sure the update will actually help, since it is difficult to justify to management, especially as the devices are installed far away and, in case of problems, someone will have to travel to the site.
I agree the cluster could be a factor, but I have not found any similar reports, and the site where the devices are installed requires HA.
Is the situation different when the other SMB is the active node? Hardware failure on both SMBs is not impossible, but I would not assume that first. In your situation, any possible solution is better than the current one, where the gateway is down for a long time after every power outage until a policy install. I would rather test a new 1530 cluster in the lab, swap the known-good SMB pair in for the cluster with the issue, and take the faulty pair into the lab for testing/replacement.
Does this happen constantly, or did it start recently? Any changes made?
After each power outage.
The first time was 03.11.2022.
No changes.
I can tell you that even if you opened a TAC case for this, most likely they will give you 3 options (ok, maybe 4, depending on the outcome of the first 3):
1) upgrade firmware
2) reboot
3) reinstall
4) RMA if none of those work
- Upgrading firmware only makes sense if the issue is known and fixed in that version, so it does not apply!
- Rebooting is part of the issue here, so it does not apply!
- Reinstalling does not make much sense here either; however, installing firmware from a USB medium does check the flash for bad sectors and can sometimes resolve the issue.
Agree with you 100%, I was referring more generally to what steps would have been suggested :-)
TAC insists on updating, even without any diagnostic steps or explanation.
Moreover, there is no guarantee that the problem will be solved after the update.
If you turn off both devices and then power on only one first, followed by the other, everything is fine.
Do the Firewalls boot more quickly than the adjacent switches and by how much?
If this describes your symptoms, you should at least try the current build of R80.20.35 (992002613), if not R80.20.40 or higher.
The test was carried out without powering off the switch to which the cluster is connected, i.e. only the two gateways in the cluster were shut down.
One more thing: according to sk170534, the GA release for the 1500/1600/1800 series is R80.20.20, but the TAC engineer claims R80.20.50.
One more thing, from show logs kernel:
mvpp2 f2000000.ethernet WAN: bad rx status 16018510 (crc error), size=1340
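If you want to see whether the WAN port is still accumulating these CRC errors over time, a minimal sketch is to grep a saved copy of the kernel log and count the events (the sample lines below are modeled on the message above; on the appliance you would feed in the real `show logs kernel` output instead of `/tmp/kernel.log`):

```shell
# Save a copy of the kernel log, then count CRC-error events in it.
# Sample data modeled on the message shown above.
cat > /tmp/kernel.log <<'EOF'
mvpp2 f2000000.ethernet WAN: bad rx status 16018510 (crc error), size=1340
mvpp2 f2000000.ethernet WAN: bad rx status 16018510 (crc error), size=1500
EOF

# A steadily growing count would point at a cabling/port problem on WAN.
grep -c 'crc error' /tmp/kernel.log
```

A rising count after replacing the cable or moving to another switch port would shift suspicion back to the appliance itself.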
Judging by the messages, a policy is present, but the VPN is not established:
2022 Dec 5 11:55:34 Dec authpriv.notice 05: 11:55:33+03:00 172.25.31.3 Action="drop" inzone="Internal" outzone="External" service_id="domain-udp" src="172.25.31.201" dst="8.8.8.8" proto="17" UP_match_table="TABLE_START" ROW_START="0" match_id="58" layer_uuid="b3e4b8dd-d54c-4659-b7ca-5039524c22ae" layer_name="KAS_AZS_Rules Network" rule_uid="f29c3f25-16cf-46c8-a209-93f3951a3a15" rule_name="Cleanup rule" ROW_END="0" UP_match_table="TABLE_END" UP_action_table="TABLE_START" ROW_START="0" action="0" ROW_END="0" UP_action_table="TABLE_END" ProductName="VPN-1 & FireWall-1" svc="53" sport_svc="15884" ProductFamily=""
2022 Dec 5 11:55:34 Dec authpriv.notice 05: 11:55:34+03:00 172.25.31.3 Action="encrypt" inzone="Internal" outzone="External" service_id="domain-udp" src="172.25.31.26" dst="192.168.234.196" proto="17" scheme:="IKE" methods:="ESP: AES-256 + SHA256 + PFS (group 14)" peer gateway="192.168.234.180" encryption failure:="" partner="" community="KAS-AZS-VPN" fw_subproduct="VPN-1" vpn_feature_name="VPN" UP_match_table="TABLE_START" ROW_START="0" match_id="12" layer_uuid="b3e4b8dd-d54c-4659-b7ca-5039524c22ae" layer_name="KAS_AZS_Rules Network" rule_uid="f7e66fdb-2821-4ca3-83cd-0234f6935740" rule_name="RULE_AZS_to_DC" ROW_END="0" UP_match_table="TABLE_END" ProductName="VPN-1 & FireWall-1" svc="53" sport_svc="57711" ProductFamily=""
2022 Dec 5 11:55:36 Dec authpriv.notice 05: 11:55:34+03:00 172.25.31.3 Action="encrypt" inzone="Internal" outzone="External" service_id="domain-udp" src="172.25.31.26" dst="192.168.234.196" proto="17" scheme:="IKE" methods:="ESP: AES-256 + SHA256 + PFS (group 14)" peer gateway="192.168.234.180" encryption failure:="" partner="" community="KAS-AZS-VPN" fw_subproduct="VPN-1" vpn_feature_name="VPN" UP_match_table="TABLE_START" ROW_START="0" match_id="12" layer_uuid="b3e4b8dd-d54c-4659-b7ca-5039524c22ae" layer_name="KAS_AZS_Rules Network" rule_uid="f7e66fdb-2821-4ca3-83cd-0234f6935740" rule_name="RULE_AZS_to_DC" ROW_END="0" UP_match_table="TABLE_END" ProductName="VPN-1 & FireWall-1" svc="53" sport_svc="64141" ProductFamily=""
2022 Dec 5 11:55:36 Dec authpriv.notice 05: 11:55:34+03:00 172.25.31.3 Action="drop" inzone="Internal" outzone="External" service_id="domain-udp" src="172.25.31.120" dst="192.168.234.197" proto="17" UP_match_table="TABLE_START" ROW_START="0" match_id="58" layer_uuid="b3e4b8dd-d54c-4659-b7ca-5039524c22ae" layer_name="KAS_AZS_Rules Network" rule_uid="f29c3f25-16cf-46c8-a209-93f3951a3a15" rule_name="Cleanup rule" ROW_END="0" UP_match_table="TABLE_END" UP_action_table="TABLE_START" ROW_START="0" action="0" ROW_END="0" UP_action_table="TABLE_END" ProductName="VPN-1 & FireWall-1" svc="53" sport_svc="44088" ProductFamily=""
2022 Dec 5 11:55:38 Dec authpriv.notice 05: 11:55:36+03:00 172.25.31.3 Action="drop" inzone="Internal" outzone="External" service_id="domain-udp" src="172.25.31.4" dst="10.135.130.13" proto="17" UP_match_table="TABLE_START" ROW_START="0" match_id="58" layer_uuid="b3e4b8dd-d54c-4659-b7ca-5039524c22ae" layer_name="KAS_AZS_Rules Network" rule_uid="f29c3f25-16cf-46c8-a209-93f3951a3a15" rule_name="Cleanup rule" ROW_END="0" UP_match_table="TABLE_END" UP_action_table="TABLE_START" ROW_START="0" action="0" ROW_END="0" UP_action_table="TABLE_END" ProductName="VPN-1 & FireWall-1" svc="53" sport_svc="50655" ProductFamily=""
2022 Dec 5 11:55:38 Dec authpriv.notice 05: 11:55:36+03:00 172.25.31.3 Action="encrypt" inzone="Internal" outzone="External" service_id="domain-udp" src="172.25.31.26" dst="192.168.234.196" proto="17" scheme:="IKE" methods:="ESP: AES-256 + SHA256 + PFS (group 14)" peer gateway="192.168.234.180" encryption failure:="" partner="" community="KAS-AZS-VPN" fw_subproduct="VPN-1" vpn_feature_name="VPN" UP_match_table="TABLE_START" ROW_START="0" match_id="12" layer_uuid="b3e4b8dd-d54c-4659-b7ca-5039524c22ae" layer_name="KAS_AZS_Rules Network" rule_uid="f7e66fdb-2821-4ca3-83cd-0234f6935740" rule_name="RULE_AZS_to_DC" ROW_END="0" UP_match_table="TABLE_END" ProductName="VPN-1 & FireWall-1" svc="53" sport_svc="59734" ProductFamily=""
2022 Dec 5 11:55:38 Dec authpriv.notice 05: 11:55:37+03:00 172.25.31.3 Action="accept" inzone="Local" outzone="External" service_id="FW1_log" src="172.25.252.231" dst="192.168.234.134" proto="6" UP_match_table="TABLE_START" ROW_START="0" match_id="0" layer_uuid="b3e4b8dd-d54c-4659-b7ca-5039524c22ae" layer_name="KAS_AZS_Rules Network" rule_uid="0E3B6801-8AB0-4b1e-A3
but after installing the policy from SmartConsole, the VPN connects and everything is fine.
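To make sense of an excerpt like this at a glance, one option is to tally the actions per record; a minimal sketch over a saved copy of the log (the sample lines are abridged to just the fields of interest, modeled on the excerpt above):

```shell
# Count how many log records hit each action, to compare traffic dropped by
# the Cleanup rule against traffic matched by the VPN rule.
# Sample data abridged from the log excerpt shown above.
cat > /tmp/fw.log <<'EOF'
Action="drop" rule_name="Cleanup rule"
Action="encrypt" rule_name="RULE_AZS_to_DC"
Action="encrypt" rule_name="RULE_AZS_to_DC"
Action="drop" rule_name="Cleanup rule"
Action="accept" rule_name="FW1_log"
EOF

# Extract the Action field from each record and count the occurrences.
grep -o 'Action="[a-z]*"' /tmp/fw.log | sort | uniq -c | sort -rn
```

A large drop count on the Cleanup rule alongside encrypt entries that never produce return traffic is consistent with a policy being present but the VPN not coming up.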
Normally, when a gateway boots, it should load the last policy it fetched locally.
Based on this and what you've described, I'm guessing you're experiencing a variant of SMB-16203.
In which case, upgrading your firmware to a later release is highly recommended.
If the device returns to factory settings, it shouldn’t get any policy.
However, I have seen instances reported on the community where problems randomly occur that are fixed by a policy install from the management.
This suggests files are getting deleted along the way that shouldn't be, which is what this bug is ultimately about; reverting to factory defaults is merely one possible symptom.
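One quick check after the next power cycle, before installing from SmartConsole, is to see which policy the gateway actually loaded. A sketch from Expert mode (`fw stat` and `fw fetch local` are standard Check Point commands, though exact output on Gaia Embedded may differ slightly):

```
# Show the name and install time of the currently loaded policy.
# Seeing "InitialPolicy" or "defaultfilter" here would confirm the gateway
# fell back to a default policy instead of its last fetched one.
fw stat

# Attempt to load the locally stored policy without contacting management:
fw fetch local
```

If `fw fetch local` restores connectivity the same way a SmartConsole install does, that narrows the problem down to the locally stored policy files being lost or corrupted across the power cycle.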
See: https://community.checkpoint.com/t5/Remote-Access-VPN/CheckPoint-Mobile-Ike-Cert-error/m-p/157350#M7...
If you’re getting an RMA, then I recommend making sure the latest firmware is loaded on the device before putting it into production.
Thank you all for your help. The TAC engineer, without further diagnostics (which would only lead to an update that may or may not solve the issue), has now suggested an RMA. I am waiting for approval from management and hope everything will be fine.
I remember, a while ago, calling in for a customer with an old 1100 appliance that was having some weird issues, and the TAC guy told me: "Buddy, all we do for those boxes is the 3 R rule... reboot, reinstall and RMA, that's it". I wanted to believe he was joking, but I don't think he was :-)
Mind you, these SMB boxes are way better than the 1100, but as @PhoneBoy advised, if it can't be upgraded, then it sounds like an RMA is indeed your only option left.
The first step is a reboot. Upgrading to newer firmware is next, then flashing the firmware from USB! If these three do not help, RMA...