SecureXL disabling during policy push causing application issues

We have an R77.30 Gateway that is under heavy load. We have performed tuning and have a plan to address this load, but in the meantime, I have a question.

During a policy push, we see that SecureXL disables (normal and expected) for anywhere from 30 to 50 seconds. Our application teams report latency and reduced connections during this window. We've been asked to try to eliminate this application "blip". It seems that R80.20 would help, as a policy push on R80.20 no longer disables SecureXL. However, we are worried that we would just be shifting the problem somewhere else in the policy install chain. Does anyone have experience with improved policy install times, and less application impact, when moving from R77.30 to R80.20?
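To put a number on that 30-50 second blip, one option is to log the accelerator status once per second across a push. This is only a sketch: it assumes the gateway's `fwaccel` CLI, and the "Accelerator Status" grep pattern may need adjusting for your version's output.

```shell
# Sketch: timestamp the SecureXL status during a policy push so the
# off -> on gap can be read out of a log afterwards.

log_status() {
    # Prefix each line read on stdin with a wall-clock timestamp
    while IFS= read -r line; do
        printf '%s %s\n' "$(date '+%H:%M:%S')" "$line"
    done
}

# Run on the gateway while pushing policy, then inspect the log
# (fwaccel is Check Point's SecureXL CLI; pattern is an assumption):
# while :; do fwaccel stat | grep -i 'Accelerator Status' | log_status; sleep 1; done >> /var/tmp/sxl_status.log
```

The gap between the last "off" timestamp and the first "on" timestamp gives a concrete duration to compare before and after any upgrade.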


2 Replies

There are two factors to the policy installation process: 

  • Policy verification and compilation: This covers validating that no rule is hidden by another (rule X hides rule Y for service Z) and compiling the policy for installation on the Security Gateway. This should be faster in R80.x.
  • Policy push to gateway: Aside from the SecureXL-related changes in R80.20, this process is largely unchanged from past versions (i.e. it's still pushing the full policy, not a delta).

In general, the target is for policy installs to take no more than 2 minutes.
If your policy installs in R80.x are taking significantly longer than that, a TAC case should be opened. 


To expand on what Dameon said, the latency/loss during policy installation could be caused by the need to restart SecureXL in R80.10 and earlier. However, in my experience it is much more likely that the connection rematch operation on the gateway is the root cause of what you are seeing. On your gateway/cluster object, on the Connection Persistence screen under Advanced, change the setting from "Rematch connections" to "Keep all connections". Then push policy twice: do you see a big reduction in latency/loss with the second policy push?
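One way to sanity-check the rematch theory is to compare the concurrent-connection count just before and just after the push. A sketch follows; it assumes the usual summary layout of `fw tab -t connections -s` (the #VALS column as the 4th field of the 2nd line), which you should verify on your own gateway first.

```shell
# Sketch: extract the concurrent-connection count from the summary line
# of "fw tab -t connections -s" (fw tab is Check Point's kernel-table CLI).
# Column position is an assumption about the output layout.

conn_count() {
    awk 'NR == 2 { print $4 }'
}

# On the gateway, run before and immediately after the policy push:
# fw tab -t connections -s | conn_count
```

A large drop in the count right after the push suggests connections are being torn down by the rematch rather than kept.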

If that doesn't help, the next place to look (especially if there is a lot of packet loss) is for RX-DRP counts racking up during a policy push, via the netstat -ni command. If you are piling up a lot of these during a policy push, this is one of the few situations where increasing the size of the network interface ring buffers might be appropriate. But I would strongly advise doing some performance tuning of the gateway first, as increasing ring buffer sizes is typically a last resort and can cause other issues.
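A quick way to spot those drops is to snapshot the counters before and after the push and diff them. The sketch below assumes the usual netstat -ni layout with RX-DRP in the sixth column; check the header on your gateway before trusting the numbers.

```shell
# Sketch: sample RX-DRP counters around a policy push and report the
# per-interface delta. Column position ($6 = RX-DRP) is an assumption
# about the netstat -ni layout; verify against your header line.

rxdrp_snapshot() {
    # Parse "netstat -ni"-style output on stdin: print "iface drops",
    # skipping the two header lines and the loopback interface
    awk 'NR > 2 && $1 != "lo" { print $1, $6 }'
}

rxdrp_delta() {
    # Args: before-file and after-file from rxdrp_snapshot; print only
    # interfaces whose RX-DRP count increased, with the delta
    awk 'NR == FNR { before[$1] = $2; next }
         ($2 - before[$1]) > 0 { print $1, $2 - before[$1] }' "$1" "$2"
}

# On the gateway:
# netstat -ni | rxdrp_snapshot > /var/tmp/rxdrp_before.txt
# ... push policy ...
# netstat -ni | rxdrp_snapshot > /var/tmp/rxdrp_after.txt
# rxdrp_delta /var/tmp/rxdrp_before.txt /var/tmp/rxdrp_after.txt
```

Any interface showing a large delta during the push window is the one to investigate before touching ring buffer sizes.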

