Hi All,
I was told to start a new thread.
I asked a question in this thread: 'fw ctl conntab -x' issue in R81.10 - Check Point CheckMates
My question was:
In the output of the command "fw ctl multik gconn -p", there are some connections that have been present for several days, even though they no longer appear in the current connection table.
No connections were removed manually.
Have you encountered this issue before?
Is there a way to remove these entries from the gconn table without a reboot?
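For reference, these are the two views I am comparing (both on the same R81.10 gateway; output formats may differ between Jumbo takes):

```
# Dynamic Dispatcher global connections table (where the stale entries appear)
fw ctl multik gconn -p

# Current connections table, for comparison
fw ctl conntab
```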
Thanks for your answer, Timothy. I still have a question.
If there are no live connections in the table, then how are the two tables — Dynamic Dispatcher and connections — related to each other?
When is an entry added to the Dynamic Dispatcher table, and what determines how long it remains there?
The whole question arises because the following log appeared in the fwk.elg file:
"[ERROR]: up_manager_resume_chain: fwhold_send failed. chain will be dropped by the fwhold API;"
As far as I understand, fwhold refers to a mechanism where, if a connection requires further inspection, the kernel places it into a "hold" state — meaning its forwarding is temporarily suspended.
The Hold_ref field in the Dynamic Dispatcher table indicates which connections are currently in this "hold" state.
Problems can arise when the number of "held" connections grows too high, as this can lead to performance degradation and packet drops. This is indicated by the aforementioned error log entry involving the fwhold API.
That’s why I’m asking: if the same connections remain in the Dynamic Dispatcher table for several days, and each of them has a Hold_ref value greater than 1, is that a sign of a problem?
And more importantly: when and how do these entries get cleared from the Dynamic Dispatcher table?
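For context, this is roughly how I am tracking whether these entries ever age out (just plain snapshots and a diff, nothing version-specific assumed):

```
# Snapshot the gconn table, wait, snapshot again, then compare
fw ctl multik gconn -p > /tmp/gconn_before.txt
sleep 3600    # wait an hour (or longer)
fw ctl multik gconn -p > /tmp/gconn_after.txt
diff /tmp/gconn_before.txt /tmp/gconn_after.txt
```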
BR,
Zolo
Maybe not a bad idea to get TAC involved as well for this.
Andy
They have already been working on the issue.
I’m just curious, and it’s always great to learn from the experts.😁
What are the network issues you are facing here? Are too many held connections causing a problem?
In the Dynamic Dispatcher table, entries for unused fw_workers can be seen for as long as they are unused 😉
Almost every time a policy is installed, there is a network disruption, and the BGP connection is also lost. This happens because, during this process, the gateway does not send BFD packets, causing the peer router to drop the BGP session.
According to a TAC case, this error can occur if your policy has any non-FQDN Domain Objects in it.
Which means it may not be related to your Dynamic Dispatcher question.
Yes, this is true to some extent, because the whole issue started when several non-FQDN objects were added.
So, is the solution to avoid using non-FQDN objects?
On one hand, that can't be the solution, and on the other hand, it would be useful to understand the root cause of the problem.
Non-FQDN objects are very bad for performance and are called out in my Gateway Performance Optimization course. Don't use them: sk162577: Traffic latency through Security Gateway when Access Control Policy contains non-FQDN Doma...
One of the ways non-FQDN objects are resolved is via a Reverse DNS lookup (IP to name).
When this process doesn't result in a mapping, this message may appear.
The other way to resolve these objects is via Passive DNS: https://support.checkpoint.com/results/sk/sk161612
Interestingly enough, if you're on a recent JHF and you have Passive DNS set up correctly, you can disable the reverse DNS mechanism (supported from R81.20 JHF Take 99).
| ID | Area | Description |
|---|---|---|
| PRJ-58814 | Security Gateway | UPDATE: Added a kernel parameter "domo_reverse_lookup_disabled" to disable reverse DNS lookups to avoid rare incorrect matches in scenarios involving non-Fully Qualified Domain Name (non-FQDN) Domains. |
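If you want to experiment with that parameter, below is a sketch of the usual kernel-parameter workflow. The parameter name comes from the PRJ entry above; the `fw ctl get/set` and fwkern.conf steps are the standard mechanism, but verify applicability for your exact JHF with TAC before touching a production gateway:

```
# Check whether the parameter exists on this JHF and read its current value
fw ctl get int domo_reverse_lookup_disabled

# Disable reverse DNS lookups for non-FQDN Domain Objects (runtime only)
fw ctl set int domo_reverse_lookup_disabled 1

# Persist the setting across reboots
echo 'domo_reverse_lookup_disabled=1' >> $FWDIR/boot/modules/fwkern.conf
```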
Yes, the behavior where entries in the Dynamic Dispatcher Table (fw ctl multik gconn -p) persist for several days even though they no longer appear in the connection table (fw ctl conntab) can indicate a problem, especially if the Hold_ref value is greater than 1 and doesn't change over time.
Meaning of the Hold_ref Field:
Hold_ref indicates that the connection is in a "hold" state managed by the fwhold mechanism. This is used when deeper inspection is needed (e.g., by IPS, Threat Prevention). A Hold_ref > 0 means the connection is not fully processed or released yet. If Hold_ref remains above 0 for an extended period and the entry stays in the table, it could indicate a problem with the fwhold logic or a stuck process.
Relationship Between the Connection Table and Dynamic Dispatcher Table:
- The Connection Table (conntab) contains active, live connections currently being handled by the firewall.
- The Dynamic Dispatcher Table (gconn) is used by CoreXL to associate connections with specific firewall workers.
- An entry in the gconn table is created when a connection is initiated, and it should be removed when the connection is properly terminated and the corresponding worker has released all references (e.g., via timeout, FIN, or RST).
These tables are related but not identical. It’s possible for a gconn entry to remain even after the related conntab entry is gone — but this should only be temporary.
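A quick way to compare the population of the two tables (both commands are standard, but the exact output layout varies by version):

```
# Connections kernel table: current number of entries, peak, and limit
fw tab -t connections -s

# Per-CoreXL-instance connection counts (the Dynamic Dispatcher's view)
fw ctl multik stat
```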
What Does the Error Log Mean?
[ERROR]: up_manager_resume_chain: fwhold_send failed. chain will be dropped by the fwhold API;
This means:
- A connection in the hold state could not be resumed properly.
- The fwhold API discarded the processing chain because it failed to resume the connection.
- The connection is dropped but might remain as a zombie in the gconn table.
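To get a feel for how often this failure occurs, you can simply count the error in the worker logs (the path below assumes a User-Space Firewall gateway with the default log location):

```
# Count fwhold resume failures in the current and rotated worker logs
grep -c "fwhold_send failed" $FWDIR/log/fwk.elg*
```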
Yes. If such entries with Hold_ref > 1 remain in the table for days, it indicates missing cleanup: possibly stuck fwhold contexts, and potentially memory leaks or performance degradation as these entries accumulate.
In short: these entries should be cleaned up automatically. If they are not, it likely points to a bug or a stuck inspection process.
Heiko, thank you for the detailed and thorough answer.
I always read your writings with great respect, as there’s so much to learn from them.