CheckMates : Products : Quantum : Security Gateways : Re: R81.20 VSX, CoreXL allocation, Dynamic Balancing
R81.20 VSX, CoreXL allocation, Dynamic Balancing
Hello,
I cannot find a real answer to this question.
We have a VSX Cluster with 3 Virtual Systems.
The CoreXL allocation is:
VS1: 14
VS2: 4
VS3: 2
We have now migrated to 19200 appliances and have 80 "cores". Do we need to change the allocation in SmartConsole, or will Dynamic Balancing handle it anyway?
Regards,
Jan
I imagine that if you run HCP on this machine, it would complain that the VS / CoreXL setup is sub-optimal for performance.
This thread, where @AmitShmuel contributed on the topic, should help with your understanding:
Great reference, Chris.
@Jan, do not confuse the number of CoreXL instances/FW workers (editable in SmartConsole) with the number of cores available to them:
"On VSX, Dynamic Balancing only changes the amount of cores running FW workers, so you can configure any number of them.
Upon SND addition, it will set the FWKs of all VSs to the new set of cores."
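To illustrate the quoted behaviour, here is a minimal Python sketch (not Check Point code; all names are hypothetical) of the idea: when the SND core set grows, every VS keeps its configured instance count, and all FWKs are simply re-homed onto whatever cores remain in the shared pool.

```python
# Hypothetical model of the quoted VSX Dynamic Balancing behaviour:
# SNDs own the first N cores; every VS's FW workers (FWKs) share the rest.

def corexl_pool(total_cores: int, snd_cores: int) -> list[int]:
    """Cores left for the shared FWK pool after the SNDs take theirs."""
    return list(range(snd_cores, total_cores))

def rebalance(vs_instances: dict[str, int], total_cores: int, snd_cores: int):
    """On an SND change, affine every VS's FWKs to the whole new pool."""
    pool = corexl_pool(total_cores, snd_cores)
    # Each VS keeps its configured instance count; only the core set changes.
    return {vs: {"fwk_threads": n, "allowed_cores": pool}
            for vs, n in vs_instances.items()}

# Jan's allocation on the 80-core 19200, assuming e.g. 8 SND cores:
alloc = rebalance({"VS1": 14, "VS2": 4, "VS3": 2}, total_cores=80, snd_cores=8)
# All three VSs now share cores 8..79; instance counts are untouched.
```

The point of the sketch is the asymmetry: the pool changes dynamically, the per-VS instance counts do not.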
To expand on that: on a VSX system you have SND cores and CoreXL cores. The SND cores are there for packet dispatching and SecureXL processing; the CoreXL cores are a 'pool' of cores that the VS FWK process threads can address. The number of FWK process threads is managed via your per-VS CoreXL configuration. These are user-mode processes that queue up for processor cycles, which the OS allocates from the CoreXL pool of cores when required. Hence they move about that pool constantly; they are not statically affined to any particular CPU core. This is also why it's totally fine to 'oversubscribe' the number of VS threads relative to CPU cores on a box, to a point (HCP will complain when you have more than double the VS threads compared to available cores, so that's a reasonable guide).
Dynamic Balancing will adjust the split between SND cores and CoreXL pool cores on the fly, based on load. It will NOT dynamically adjust your CXL/VS configuration. We don't want the overhead of dynamically balancing so many things, as there may be 50 VSs on a box, and that's just too much to balance. So CXL/VS is static, and your VSs should have sufficient CXL/VS to do their job without the risk of resource exhaustion, whether through too few FWK threads or too few cores in the machine. You should never have so many CXL/VS on one VS that it can occupy all the CoreXL pool cores and starve the other VSs of resources.
So yes, you should allocate more CXL/VS to your VSs.
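As a rough sanity check of the "double the VS threads" guide mentioned above, here is a small sketch (not an HCP implementation; the helper name is made up):

```python
# Sketch of the oversubscription guide described above: HCP reportedly warns
# when total FWK threads across all VSs exceed 2x the CoreXL pool size.

def oversubscription_ok(vs_instances: dict[str, int], pool_cores: int) -> bool:
    """True while total FWK threads stay within 2x the CoreXL pool."""
    total_fwk = sum(vs_instances.values())
    return total_fwk <= 2 * pool_cores

# Jan's current allocation on the 80-core box, assuming e.g. 8 SND cores:
current = {"VS1": 14, "VS2": 4, "VS3": 2}           # 20 FWK threads total
print(oversubscription_ok(current, pool_cores=72))  # True - plenty of headroom
```

On those assumptions, the current 20 threads are nowhere near the 2x guide, which is exactly why more CXL/VS can safely be allocated.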
Also, for reference, this is a (sanitized/redacted) example of the HCP test Emma and I referred to above:
Thank you all for your explanations. So I will need to "guess" how to allocate CoreXL instances. Or is there a cheat sheet anywhere to calculate instances from concurrent connections? Also, does changing this setting cause full downtime of the VS, even in a cluster?
Jan
It's based more on anticipated load, which you could derive from connections, but it will also depend on the enabled blades etc.
Currently, in R81.20, you can assign up to 32 CoreXL instances to a particular Virtual System.
Starting back in R80.20, changes to the number of FW worker instances (FWK) in a VSLS setup do not require downtime.
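Putting the figures from this thread together, here is a hypothetical helper (not a Check Point tool) for sanity-checking a proposed per-VS allocation against the 32-instance cap, the "don't occupy the whole pool" advice, and the 2x HCP guide discussed above:

```python
MAX_INSTANCES_PER_VS = 32   # R81.20 per-VS CoreXL instance limit

def check_allocation(vs_instances: dict[str, int], pool_cores: int) -> list[str]:
    """Return a list of warnings for a proposed per-VS CoreXL allocation."""
    warnings = []
    for vs, n in vs_instances.items():
        if n > MAX_INSTANCES_PER_VS:
            warnings.append(f"{vs}: {n} instances exceeds the 32-instance cap")
        if n >= pool_cores:
            warnings.append(f"{vs}: {n} instances could occupy the whole "
                            f"{pool_cores}-core pool and starve other VSs")
    if sum(vs_instances.values()) > 2 * pool_cores:
        warnings.append("total FWK threads exceed 2x the CoreXL pool (HCP guide)")
    return warnings

# A scaled-up allocation for the 80-core 19200, assuming 8 SND cores:
print(check_allocation({"VS1": 28, "VS2": 10, "VS3": 6}, pool_cores=72))  # []
```

The example numbers are illustrative only; the real split should follow the anticipated per-VS load and enabled blades, as noted above.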
