Thomas_Eichelbu
Advisor

After installing Take 58 on R81, FW Policy is lost. Firewall becomes unavailable ...

Hello Check Mates, 

We have seen several occasions where the firewall policy was totally lost after installing Take 58;
the "Default Filter" policy was loaded instead.
We mostly deploy updates via CDT.

We went from Take 44 to Take 58, installed the hotfix on 13 gateways, and all 13 got totally stuck.
A Check Point TAC case is ongoing.

When we run "fw fetch" on the CLI, the policy loads instantly.
But when we reboot the firewall, the issue happens again.
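
For anyone who wants to reproduce the workaround, a minimal sketch (the management IP 10.1.1.10 is hypothetical; exact output varies by version):

  fw stat                # check the loaded policy; "defaultfilter" means the real policy is gone
  fw fetch 10.1.1.10     # pull the current policy from the management server
  fw fetch localhost     # alternatively, load the locally stored copy of the last policy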

Has anybody seen this before?

Best regards

PhoneBoy
Admin

Yeah, that does not sound like expected behavior at all, since the gateway should:

  • Fetch the last installed policy from management
  • Use the last policy the gateway has cached if management is unavailable

This is long-established behavior, and the fact that it's doing neither is definitely a bug.
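
To see which of those two paths a gateway is taking, a quick check on the CLI (a sketch; the cache path below is an assumption and can differ by version):

  fw stat                      # shows the loaded policy name and when it was installed
  ls $FWDIR/state/local/FW1/   # the locally cached policy the gateway should fall back to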

the_rock
Legend

Yeah, I agree with @PhoneBoy. That certainly does not seem like normal/expected behaviour. Hopefully, TAC will involve R&D in the issue.

Pawel_Szetela
Contributor

Hi Thomas,

In my case: R80.40, a Load Sharing cluster with 3 members, clean install, Jumbo Take 156.

Two gateways load defaultfilter after every reboot (fw fetch works); one is working fine.

Any news from TAC?

Regards,

Thomas_Eichelbu
Advisor

Hi Pawel,

No real news so far; we are still waiting, at best for a remote session to simulate and replicate the issue. I will keep you posted when new information becomes available.

Regards

Naama_Specktor
Employee

Hi 🙂

@Thomas_Eichelbu

My name is Naama Specktor and I am from Check Point.
I would appreciate it if you could share the TAC SR # with me, here or in a PM.

 

Thank you,

Naama 

Ruan_Kotze
Advisor

Same issue on one site (a cluster of two 6400s) that we upgraded to Take 65.

Going in via LOM and running "fw stat" shows defaultfilter as the firewall policy, and "cphaprob stat" then of course shows the HA module not started.

"fw fetch" was a temporary fix to get connectivity up and running, but it would revert to the defaultfilter policy again on reboot.  The permanent fix was to push policy from the SMS.

Pawel_Szetela
Contributor

Hi,

In our case, policy installation didn't fix the problem, but we managed to fix it another way.

Later checks showed that even fw fetch only worked partially: AV and Anti-Bot were not working until policy installation.

After many checks and attempts based on Support Center findings, without luck, we started to analyze the messages files. We compared the files from the "working" cluster member with the faulty one and found some differences.

On the faulty member we found many entries like this:

Apr 27 18:27:00 2022 CPFWX kernel: [fw4_13];[ERROR]: domo_ip_to_domain_lookup: domo global is NULL
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_13];[ERROR]: nrb_column_ip_match_domains_for_ip: domo_ip_to_domain_lookup failed
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_13];[ERROR]: nrb_column_ip_match: nrb_column_ip_match_domains_for_ip failed
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_13];[ERROR]: nrb_rulebase_default_match: virtual match_func failed for column 'Destination IP' (2)
Apr 27 18:27:00 2022 CPFWX kernel: FW-1: lost 3252 debug messages
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_14];[ERROR]: domo_ip_to_domain_lookup: domo global is NULL
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_14];[ERROR]: nrb_column_ip_match_domains_for_ip: domo_ip_to_domain_lookup failed
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_14];[ERROR]: nrb_column_ip_match: nrb_column_ip_match_domains_for_ip failed
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_14];[ERROR]: nrb_rulebase_default_match: virtual match_func failed for column 'Destination IP' (2)
Apr 27 18:27:00 2022 CPFWX kernel: FW-1: lost 2904 debug messages
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_15];[ERROR]: domo_ip_to_domain_lookup: domo global is NULL
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_15];[ERROR]: nrb_column_ip_match_domains_for_ip: domo_ip_to_domain_lookup failed
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_15];[ERROR]: nrb_column_ip_match: nrb_column_ip_match_domains_for_ip failed
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_15];[ERROR]: nrb_rulebase_default_match: virtual match_func failed for column 'Destination IP' (2)
Apr 27 18:27:00 2022 CPFWX kernel: FW-1: lost 3392 debug messages
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_16];[ERROR]: domo_ip_to_domain_lookup: domo global is NULL
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_16];[ERROR]: nrb_column_ip_match_domains_for_ip: domo_ip_to_domain_lookup failed
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_16];[ERROR]: nrb_column_ip_match: nrb_column_ip_match_domains_for_ip failed
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_16];[ERROR]: nrb_rulebase_default_match: virtual match_func failed for column 'Destination IP' (2)
Apr 27 18:27:00 2022 CPFWX kernel: FW-1: lost 2716 debug messages
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_17];[ERROR]: domo_ip_to_domain_lookup: domo global is NULL
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_17];[ERROR]: nrb_column_ip_match_domains_for_ip: domo_ip_to_domain_lookup failed
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_17];[ERROR]: nrb_column_ip_match: nrb_column_ip_match_domains_for_ip failed
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_17];[ERROR]: nrb_rulebase_default_match: virtual match_func failed for column 'Destination IP' (2)
Apr 27 18:27:00 2022 CPFWX kernel: FW-1: lost 2424 debug messages
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_18];[ERROR]: domo_ip_to_domain_lookup: domo global is NULL
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_18];[ERROR]: nrb_column_ip_match_domains_for_ip: domo_ip_to_domain_lookup failed
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_18];[ERROR]: nrb_column_ip_match: nrb_column_ip_match_domains_for_ip failed
Apr 27 18:27:00 2022 CPFWX kernel: [fw4_18];[ERROR]: nrb_rulebase_default_match: virtual match_func failed for column 'Destination IP' (2)
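
If you want to check your own gateways for the same signature, a simple search over the messages files (standard Gaia location) should be enough:

  grep -E "domo_ip_to_domain_lookup|nrb_column_ip_match" /var/log/messages*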

Later, after a reboot (before running fw fetch), we analyzed install_policy_report.txt. The first error in this file:

cmi_loader: 'signatures_done_cb' failed for app: (FILE_SECURITY), app_id (12)

led us to sk173248 and gave us a clue: maybe an IOC problem, maybe MD5.
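
To look for that first error on your own gateway (the report's location varies between versions, so locating it first is the safe assumption):

  find / -name "install_policy_report.txt" 2>/dev/null   # locate the report
  grep -n "failed" <path_from_find>                      # list the failing apps, e.g. the cmi_loader line above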

In our policy we use IOC indicators configured both in SmartConsole and from IOC feeds. Removing the IOC feed for MD5 hashes resolved the problem, so we then decided to remove all IOC indicators from SmartConsole and move them to an IOC feed as well.
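
For anyone who needs to do the same, a minimal sketch with the gateway-side ioc_feeds utility (see sk132193 on Custom Intelligence Feeds; the feed name and URL below are hypothetical, and the flag spellings are from memory and may differ by version):

  ioc_feeds show                            # list the configured indicator feeds
  ioc_feeds delete --feed_name md5_feed     # remove the problematic MD5 feed (hypothetical name)
  ioc_feeds add --feed_name our_iocs --transport https --resource "https://feeds.example.com/iocs.csv"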

And that solved our problem. The strange thing is that one cluster member worked fine, but that's another mystery of Check Point 🙂

Regards,

Thomas_Eichelbu
Advisor

Hello Guys, 

At the customer with the corrupt Take 58 installations we had, and still have, Indicator files with MD5 hashes loaded.
We saw that after a policy push from the SMS the policy was loaded successfully; it was then not lost after subsequent reboots.

Other customers that do not have Indicator files with MD5 hashes loaded have no issue at all with Take 58.

I hope we find time to investigate this ourselves a bit more and reproduce the issue in a lab environment.
Check Point TAC is also still investigating.

