We have an R77.30 gateway that is under heavy load. We have performed tuning and have a plan to address the load, but in the meantime I have a question.
During a policy push, SecureXL disables (normal and expected) for anywhere from 30 to 50 seconds. Our application teams report latency and reduced connections during this window, and we've been asked to try to eliminate this application "blip". It seems that R80.20 would help, since a policy push on R80.20 no longer disables SecureXL. However, we are worried that we would just be shifting the problem somewhere else in the policy install chain. Does anyone have experience with improved policy install times and less application impact when moving from R77.30 to R80.20?
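As an aside, the length of that SecureXL-off window can be measured from the gateway CLI rather than estimated. The sketch below is my own, not from the original post: `fwaccel stat` reports the acceleration status, and the hypothetical `watch_status` helper prints a timestamped line whenever a polled command's output changes, so running it across a policy push shows how long acceleration actually stays disabled.

```shell
# Hypothetical helper: poll a command and print a timestamped line each time
# its output changes. On the gateway you would poll `fwaccel stat`; any
# command works, which also makes the helper easy to try off-box.
watch_status() {
  cmd="$1"            # command whose output we watch
  n="${2:-60}"        # number of polls
  interval="${3:-1}"  # seconds between polls
  prev="__none__"
  i=0
  while [ "$i" -lt "$n" ]; do
    cur=$($cmd)
    if [ "$cur" != "$prev" ]; then
      printf '%s | %s\n' "$(date '+%H:%M:%S')" "$cur"
      prev="$cur"
    fi
    i=$((i + 1))
    sleep "$interval"
  done
}

# On the gateway, something like:
#   watch_status "fwaccel stat" 120 1
# then push policy and note the timestamps where the status flips.
```

The timestamps on the status changes give you the off/on window directly, which is handy evidence when comparing R77.30 behaviour against an R80.20 lab gateway.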
There are two factors to the policy installation process:
In general, the target is for a policy install to take no more than two minutes.
If your policy installs on R80.x are taking significantly longer than that, a TAC case should be opened.
To expand on what Dameon said, the latency/loss during policy installation could be caused by the need to restart SecureXL in R80.10 and earlier. However, based on my experience it is much more likely that the connection rematch operation on the gateway is the root cause of what you are seeing. On your gateway/cluster object, on the Connection Persistence screen under Advanced, change the setting from "Rematch connections" to "Keep all connections", then push policy twice. Do you see a big reduction in latency/loss on the second policy push?
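To put a number on the "blip" across those two pushes, one rough approach (my sketch, not part of the original advice) is to run a fixed-count ping through the gateway during each policy install and compare the loss figures. The small hypothetical helper below just pulls the loss percentage out of a ping summary line:

```shell
# Hypothetical helper: extract the packet-loss percentage from the summary
# line of `ping` output (e.g. "60 packets transmitted, 57 received,
# 5% packet loss, time 59083ms"). Reads stdin, prints the number.
loss_pct() {
  grep -o '[0-9.]*% packet loss' | cut -d'%' -f1
}

# Usage idea: during each policy push, run
#   ping -c 60 <host-behind-gateway> | loss_pct
# and compare the value for the "rematch" vs "keep all connections" pushes.
```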
If that doesn't help, the next place to look (especially if there is heavy packet loss) is for RX-DRPs racking up during a policy push, via the netstat -ni command. If a lot of these pile up during a policy push, this is one of the very limited situations where increasing the size of the network interface ring buffers might be appropriate. But I would strongly advise doing some performance tuning of the gateway first, as increasing ring buffer sizes is typically a last resort and can cause other issues.
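A quick way to see whether RX-DRPs are climbing is to snapshot netstat -ni before and after the policy push and diff the RX-DRP column. A minimal sketch of my own (assuming the standard Linux netstat -ni layout, where RX-DRP is the sixth column after two header lines):

```shell
# Hypothetical helper: given two files holding `netstat -ni` output captured
# before and after a policy push, print each interface whose RX-DRP counter
# increased, together with the delta.
rx_drp_delta() {
  # $1 = before-snapshot file, $2 = after-snapshot file
  awk 'NR == FNR { if (FNR > 2) before[$1] = $6; next }
       FNR > 2   { d = $6 - before[$1]; if (d > 0) print $1, d }' "$1" "$2"
}

# Usage idea on the gateway:
#   netstat -ni > /tmp/before   # then push policy
#   netstat -ni > /tmp/after
#   rx_drp_delta /tmp/before /tmp/after
```

Interfaces that only drop during the push are the ones worth investigating before touching ring buffer sizes.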