Hello Check Mates,
we have seen several occasions where the firewall policy was completely lost after installing Take 58:
the "Default Filter Policy" was loaded instead.
We mostly deploy updates via CDT.
We went from Take 44 to Take 58, installed the hotfix on 13 gateways, and all 13 got completely stuck.
A Check Point TAC case is ongoing.
Running "fw fetch" on the CLI works instantly ...
But after a reboot of the firewall the issue happens again.
Has anybody seen this before?
best regards
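The check-and-workaround cycle described above can be sketched as a short Gaia shell snippet. This is a sketch under assumptions: the `awk` field positions assume the usual two-line `fw stat` output, and the management IP is a placeholder.

```shell
# Read the name of the currently loaded policy from "fw stat".
# Assumed output shape:
#   HOST      POLICY        DATE
#   localhost defaultfilter 27Apr2022 ...
policy=$(fw stat 2>/dev/null | awk 'NR==2 {print $2}')

if [ "$policy" = "defaultfilter" ]; then
    echo "Policy was lost after boot - re-fetching ..."
    # Temporary workaround from the thread: re-fetch the policy.
    fw fetch localhost       # load the locally stored policy
    # or: fw fetch 192.0.2.10   # fetch from the management server (placeholder IP)
fi
```

As the thread notes, this only holds until the next reboot, so it is a stopgap rather than a fix.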
Yeah, that does not sound like expected behavior at all, since the gateway should:
- load the locally stored policy at boot
- fetch the latest policy from the management server
This is long-established behavior, and the fact that it's doing neither is definitely a bug.
Yea, I agree with @PhoneBoy. That certainly does not seem like normal/expected behaviour. Hopefully, TAC will bring R&D into the issue.
Hi Thomas,
In my case: R80.40, Cluster LS with 3 members, clean install, Jumbo Take 156.
Two gateways load the defaultfilter after every reboot (fw fetch works); one is working fine.
Any news from TAC?
Regards,
Hi Pawel,
well, no real news so far ... we are still waiting ...
at best for a remote session to simulate and replicate the issue ...
I will keep you posted when new info becomes available ...
regards.
Hi 🙂
my name is Naama Specktor and I am from Check Point.
I would appreciate it if you could share the TAC SR # with me, here or in a PM.
thank you,
Naama
Same issue on one site (a cluster of two 6400s) that we upgraded to Take 65.
Going in via LOM and running "fw stat" shows defaultfilter as the firewall policy, and "cphaprob stat" of course then shows the HA module not started.
"fw fetch" was a temporary fix to get connectivity up and running, but it would revert to the defaultfilter policy again on reboot. The permanent fix was to push policy from the SMS.
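The two symptoms above can be checked together right after boot. A minimal sketch, assuming the usual output wording of `fw stat` and `cphaprob stat` (exact strings can vary by version):

```shell
# Quick post-boot health check for the two symptoms described above.
if fw stat 2>/dev/null | grep -q 'defaultfilter'; then
    echo "WARNING: defaultfilter loaded - real policy was lost"
fi
if cphaprob stat 2>/dev/null | grep -q 'HA module not started'; then
    echo "WARNING: cluster module is down"
fi
```

If both warnings fire, the permanent fix reported here was a policy push from the SMS (e.g. via SmartConsole), not another `fw fetch` on the gateway.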
Hi,
In our case, policy installation didn't fix the problem, but we managed to fix it another way.
Later checks showed that even fw fetch only worked partially - AV and Anti-Bot were not working until a policy install.
After many checks and attempts, with no luck finding anything in the Support Center, we started to analyze the messages files. We compared the files from the "working" cluster member with the faulty one and found some differences.
On the faulty member we found many entries like this:
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_13];[ERROR]: domo_ip_to_domain_lookup: domo global is NULL
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_13];[ERROR]: nrb_column_ip_match_domains_for_ip: domo_ip_to_domain_lookup failed
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_13];[ERROR]: nrb_column_ip_match: nrb_column_ip_match_domains_for_ip failed
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_13];[ERROR]: nrb_rulebase_default_match: virtual match_func failed for column 'Destination IP' (2)
Apr 27 18:27:00 2022 CPFWX kernel: FW-1: lost 3252 debug messages
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_14];[ERROR]: domo_ip_to_domain_lookup: domo global is NULL
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_14];[ERROR]: nrb_column_ip_match_domains_for_ip: domo_ip_to_domain_lookup failed
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_14];[ERROR]: nrb_column_ip_match: nrb_column_ip_match_domains_for_ip failed
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_14];[ERROR]: nrb_rulebase_default_match: virtual match_func failed for column 'Destination IP' (2)
Apr 27 18:27:00 2022 CPFWX kernel: FW-1: lost 2904 debug messages
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_15];[ERROR]: domo_ip_to_domain_lookup: domo global is NULL
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_15];[ERROR]: nrb_column_ip_match_domains_for_ip: domo_ip_to_domain_lookup failed
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_15];[ERROR]: nrb_column_ip_match: nrb_column_ip_match_domains_for_ip failed
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_15];[ERROR]: nrb_rulebase_default_match: virtual match_func failed for column 'Destination IP' (2)
Apr 27 18:27:00 2022 CPFWX kernel: FW-1: lost 3392 debug messages
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_16];[ERROR]: domo_ip_to_domain_lookup: domo global is NULL
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_16];[ERROR]: nrb_column_ip_match_domains_for_ip: domo_ip_to_domain_lookup failed
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_16];[ERROR]: nrb_column_ip_match: nrb_column_ip_match_domains_for_ip failed
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_16];[ERROR]: nrb_rulebase_default_match: virtual match_func failed for column 'Destination IP' (2)
Apr 27 18:27:00 2022 CPFWX kernel: FW-1: lost 2716 debug messages
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_17];[ERROR]: domo_ip_to_domain_lookup: domo global is NULL
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_17];[ERROR]: nrb_column_ip_match_domains_for_ip: domo_ip_to_domain_lookup failed
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_17];[ERROR]: nrb_column_ip_match: nrb_column_ip_match_domains_for_ip failed
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_17];[ERROR]: nrb_rulebase_default_match: virtual match_func failed for column 'Destination IP' (2)
Apr 27 18:27:00 2022 CPFWX kernel: FW-1: lost 2424 debug messages
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_18];[ERROR]: domo_ip_to_domain_lookup: domo global is NULL
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_18];[ERROR]: nrb_column_ip_match_domains_for_ip: domo_ip_to_domain_lookup failed
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_18];[ERROR]: nrb_column_ip_match: nrb_column_ip_match_domains_for_ip failed
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_18];[ERROR]: nrb_rulebase_default_match: virtual match_func failed for column 'Destination IP' (2)
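The comparison step described above — diffing the kernel [ERROR] lines between the working and the faulty member — can be sketched with standard tools. The directory names are placeholders for copies of /var/log/messages pulled from each cluster member:

```shell
# Summarize which kernel error functions appear, per member, then diff.
for m in member-good member-faulty; do
    grep -oE '\[ERROR\]: [A-Za-z0-9_]+:' "$m/messages" 2>/dev/null \
        | sort | uniq -c | sort -rn > "$m.summary"
done
# Lines present only in the faulty member's summary point at the failing subsystem.
diff member-good.summary member-faulty.summary
```

On the logs quoted above, this would surface the repeated domo_ip_to_domain_lookup / nrb_column_ip_match failures on the faulty member.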
Later, after a reboot (before running fw fetch), we analyzed install_policy_report.txt. The first error in this file:
cmi_loader: 'signatures_done_cb' failed for app: (FILE_SECURITY), app_id (12)
led us to sk173248 and gave us a clue: maybe an IOC problem, maybe MD5.
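A quick way to spot that failure on the next occurrence is to grep the policy installation report for the cmi_loader error string. The report location below is an assumption; adjust it to wherever your gateway writes the file:

```shell
# Look for the FILE_SECURITY signature-load failure the post points at.
# $FWDIR is set by the Check Point environment; the path is an assumption.
report="$FWDIR/log/install_policy_report.txt"
grep -n "signatures_done_cb" "$report" 2>/dev/null \
    || echo "cmi_loader error not found in $report"
```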
In our policy we use IOCs configured in SmartConsole and from IOC feeds. Removing the IOC feed for MD5 resolved the problem, so we decided to remove all IOCs from SmartConsole and move them to an IOC feed.
And that solved our problem. The strange thing is that one cluster member worked fine ... but that's another mystery of Check Point 🙂
Regards,
Hello Guys,
the customer with the corrupt Take 58 installations had, and still has, Indicator files with MD5 hashes loaded ...
We saw that after a policy push from the SMS the policy was loaded successfully; the policy was then not lost after subsequent reboots.
Other customers that do not have Indicator files with MD5 hashes loaded have no issue at all with Take 58.
I hope we find time to investigate this ourselves a bit more ... and reproduce the issue in a lab environment.
Check Point TAC is also still investigating ...