Hi all,
We are facing a peculiar issue with our R80.20 cluster.
Hardware: 5900 appliance
OS/Version: GAIA R80.20
Blades Enabled: Firewall, IPS and Anti-bot.
At least once a week, one of the cluster members freezes. It is always the standby member, and it only comes back up after a reboot.
When we check the health using CPView history for the time of the issue (CPU, RAM, connections, Hmem, Smem, Kmem failed allocations), everything looks fine; in fact, CPU is barely 10% utilized, RAM around 10%, and connections are below 10,000.
Currently, R&D is involved and working on this. Based on their analysis we have disabled priority queues and drop optimization, but no luck.
It would be helpful if you could bring in your expertise to help narrow down the issue while R&D continues its investigation.
- Is jumbo hotfix 47 installed?
- Any errors in /var/log/messages?
- If you have only ~10,000 connections, disable SecureXL and check again (rough commands for all three checks are sketched below).
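A rough way to run those checks from expert mode (command names from memory, so please verify the exact syntax on your take):
#installed_jumbo_take     (or cpinfo -y all, to list the installed jumbo take and hotfixes)
#grep -iE "error|warn|fail" /var/log/messages*
#fwaccel stat             (current SecureXL status)
#fwaccel off              (runtime only; acceleration comes back after a reboot)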
FYI:
SecureXL has been significantly revised in R80.20. It now works in user space. This has also led to some changes in "fw monitor". The SecureXL driver takes a certain amount of kernel memory per core, and that was adding up to more kernel memory than Intel/Linux was allowing.
More info here:
R80.x Security Gateway Architecture (Logical Packet Flow)
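If you want to eyeball this on the box itself, two rough checks (output labels can differ slightly between takes):
#fwaccel stat        (acceleration status and which features are accelerated)
#fw ctl pstat        (kernel memory statistics, including hmem/smem and failed allocations)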
I think involving R&D is the right way to go.
Please find the answers/comments inline:
- Is jumbo hotfix 47 installed?
Currently Jumbo Take 33 is installed, and Take 47 does not list any resolved issue related to freezing.
- Any errors in /var/log/messages?
From the time of the freeze until the reboot, no relevant information is found in /var/log/messages. In fact, there is no logging at all during the freeze.
- If you have only ~10,000 connections, disable SecureXL and check again.
The freeze is most often observed early in the morning; during the day traffic surpasses 56,000 connections, so SecureXL cannot be disabled.
But if SecureXL were causing the issue, it should affect the active member, so why is the standby member, which is not handling any traffic, the one getting affected?
Does the console on the standby still respond during the "freeze"? Or do you have to pull the power cord to recover?
When you say "always standby member", do you mean that the issue always occurs on whatever member happens to be standby, and it has happened on both pieces of hardware? Or does it always happen on the same piece of hardware when it is standby? If the latter, check the hardware sensor data; I believe you can look at historical sensor data right from cpview in R80.20 and later.
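If the history view doesn't show the sensors, a current snapshot can usually be pulled directly from expert mode; roughly (please double-check the exact flavor name on your version):
#cpstat os -f sensors     (temperature / fan speed / voltage readings on appliances)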
What do the commands cphaprob stat, cphaprob -a if and cphaprob -l list display while the standby member is in its afflicted state? Does ClusterXL still report everything is OK or does it report a failure? What I would try to do in this case is determine if it is ClusterXL itself misbehaving, or the underlying firewall infrastructure that is experiencing a problem and ClusterXL is just reporting it. Based on the troubleshooting steps so far it sounds like TAC suspects something in the underlying firewall code. I assume TAC has already looked in /var/log/messages* for any smoking guns?
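For reference, roughly what I'd capture on the standby while it is in that state (assuming the console still answers):
#cphaprob stat          (overall cluster state as seen by each member)
#cphaprob -a if         (per-interface CCP state, including the Sync interface)
#cphaprob -l list       (registered pnotes / critical devices and which one reports a problem)
#grep -iE "cluster|pnote|fwha" /var/log/messages*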
Is the standby member experiencing issues with the Sync interface specifically? If so see these threads:
https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30640
https://www.cpug.org/forums/showthread.php/22679-HA-Failover-appears-to-be-caused-by-sync-interface
Hi Timothy,
We have faced this issue on both pieces of hardware, i.e. whichever cluster member happens to be in standby mode freezes.
In CPView history we are unable to see hardware sensor readings such as CPU temperature, fan speed, etc.
The ClusterXL commands do report a problem.
Command outputs
#cphaprob stat
Member 1 - Active Attention
Member 2 - Lost.
#cphaprob -a if
Out of the 15 interfaces, we see 3 interfaces in a down state, including the Sync interface. The same 3 interfaces show down during every freeze incident.
#cphaprob -l list
All ok on Member 1
Member 2 not accessible.
We see no entries in /var/log/messages from the time of the freeze until the box is rebooted.
Is the standby member experiencing issues with the Sync interface specifically?
Nothing specifically pointing to a Sync issue, but we did see some RX buffer overruns on the Sync interface. The Sync between the cluster members was originally connected back to back; we changed this by connecting them through a switch and hard-coding the interfaces at both the firewall and switch ends to full duplex, and we did not see any buffer overrun readings after that.
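For anyone wanting to check for the same symptom, the overruns show up with standard Linux tools; a rough sketch (the interface name below is just a placeholder):
#netstat -ni                     (RX-OVR / RX-DRP counters per interface)
#ethtool -S eth1-01 | grep -iE "drop|err|miss|fifo"     (driver-level counters for the Sync NIC)
#ethtool -g eth1-01              (current vs. maximum RX/TX ring buffer sizes)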
Does the console on the standby still respond during the "freeze"? Or do you have to pull the power cord to recover?
The console on the standby doesn't respond during the freeze unless we boot it into online debug mode (kdb mode). We hard-reboot when the freeze occurs.
So it sounds like you are experiencing a hard hang on the standby. In cpview history mode leading up to the incident does free memory slowly decrease? Just wondering if the kernel has somehow managed to exhaust all free memory which would cause all user-space processes to hang/die (including getty for the console).
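If the box hangs hard enough that the cpview history is unusable afterwards (cpview -t should open the history view, if I remember the flag correctly), a crude fallback is to log free memory to disk in the background and read the file after the next incident. A minimal sketch, with the file name and interval only as an example:
#nohup sh -c 'while true; do date >> /var/log/mem_history.log; grep -E "MemFree|Slab" /proc/meminfo >> /var/log/mem_history.log; sleep 300; done' &
Watching Slab as well as MemFree should help show whether it is kernel-side memory that is being exhausted.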
In hang situations such as these, making an attempt to determine whether the hang is occurring in Gaia/Linux driver or Check Point's custom kernel code can be very helpful. Let's start with Gaia/Linux:
Are you using the new 3.10 kernel? (uname -a from expert mode) My guess is yes and there are significantly newer NIC drivers in use by that new kernel.
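For reference, you should see one of the two kernel families in the output (hostname and exact build strings below are only illustrative):
#uname -a
Linux gw-standby 2.6.18-92cpx86_64 ...    <- older 2.6 kernel
Linux gw-standby 3.10.0-693cpx86_64 ...   <- newer 3.10 kernel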
Another hang cause can be getting stuck inside a hardware interrupt, which can be caused by hardware or a driver. Since handling NIC traffic is by far the most common hardware interrupt operation on a firewall, it is logical to look there. I'd suggest trying to simplify what the NICs and their Gaia/Linux drivers are trying to do on both firewalls and see if it impacts the problem by disabling (quick ways to check the current settings are sketched after this list):
1) Hyperthreading (adjust back to 6 instances for a 2/6 split via cpconfig)
2) Disable Multi-Queue if enabled
3) If they have been modified, set interface ring buffer sizes back to their default
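Rough ways to check where each of these currently stands before changing anything (exact commands can differ between 2.6 and 3.10 kernel builds, so treat these as a starting point):
#grep -c processor /proc/cpuinfo      (logical core count; double the physical count suggests SMT/Hyperthreading is on)
#cpmq get                             (Multi-Queue status on 2.6-kernel builds; newer builds use mq_mng instead)
#ethtool -g <interface>               (current vs. default/maximum ring buffer sizes)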
If the hang is occurring in Check Point code, it will be a lot tougher to find. It might be interesting to run ips off and fw amw unload on just the standby and see if the problem stops happening (you'll need to run these again if you reinstall policy on the cluster; a rough sequence is sketched after this list). Obviously, if a regular failover to the standby occurs, the IPS and Anti-Bot blades will not be protecting your traffic there, so take that into consideration. Also try the following simplifications from the Check Point code side:
1) Disable the Monitoring & QoS blades on the gateway if enabled; these features load extra kernel drivers on the gateway
2) Disable SecureXL - Note that SecureXL cannot really be permanently disabled in R80.20 and later
3) Look at the output of the enabled_blades command, anything else you can disable?
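A rough sequence for the standby-only test (these take effect at runtime, so re-check them after any policy install or reboot):
#enabled_blades           (confirm what is actually enabled on this member)
#ips off                  (disable IPS inspection on this member)
#fw amw unload            (unload the Anti-Bot / Threat Prevention policy from this member)
#fwaccel off              (optionally take SecureXL out of the picture on the standby only)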
Hi guys,
I think if a 5900 with 10,000 connections freezes, then something is seriously wrong.
We have several customers running 5900 appliances with R80.20 JHF47, and this error does not occur there.
R&D should take a closer look at the appliance here.
Regards
Heiko
The 5900 appliance should be running a 2.6 kernel, so the 3.10 kernel and driver problem is not relevant here. But I agree with you: open servers with the 3.10 kernel have some problems with SecureXL enabled and the network drivers. We've also opened some cases on this :-(
Regards
Heiko