Maestro VSX: need to assign more cores to SND on a 7000-appliance-based security group.
Hi, we are preparing to roll out a 7000-based Maestro VSX security group with two Virtual Systems. We have received a performance test report from Check Point which states that, with our planned load on this SG, we should increase the SND core count from the default 4 to 8. What would be the proper 8-core SND allocation on the 7000 platform, taking the hyperthreading core numbering into account?
I assume that, in line with the default settings which use cores 0,16,1,17 for SND, we should add 2,18,3,19?
What would be the proper commands to remove these cores from the FW worker allocation and add them to the SND (on an R81.10 Scalable Platform)?
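To sketch my assumption (based on the usual numbering where thread N+16 is the hyperthreading sibling of physical core N on this 32-thread appliance; the sibling mapping can be verified with cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list):
default 4 SND cores: 0,16,1,17 (physical cores 0-1 plus their HT siblings)
planned 8 SND cores: 0,16,1,17,2,18,3,19 (physical cores 0-3 plus their HT siblings)
remaining FW worker cores: 4-15 and 20-29, with 30 and 31 still left to the fwd instances as in the output below.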
Current affinity and mq_mng status:
g_fw ctl affinity -l
-*- 1 blade: 1_01 -*-
VS_0: CPU 2 3 4 5 6 7 8 9 10 11 12 13 14 15 18 19 20 21 22 23 24 25 26 27 28 29
VS_0 fwk: CPU 2 3 4 5 6 7 8 9 10 11 12 13 14 15 18 19 20 21 22 23 24 25 26 27 28 29
VS_1: CPU 2 3 4 5 6 7 8 9 10 11 12 13 14 15 18 19 20 21 22 23 24 25 26 27 28 29
VS_1 fwk: CPU 2 3 4 5 6 7 8 9 10 11 12 13 14 15 18 19 20 21 22 23 24 25 26 27 28 29
VS_1 fwd: CPU 31
VS_2: CPU 2 3 4 5 6 7 8 9 10 11 12 13 14 15 18 19 20 21 22 23 24 25 26 27 28 29
VS_2 fwk: CPU 2 3 4 5 6 7 8 9 10 11 12 13 14 15 18 19 20 21 22 23 24 25 26 27 28 29
VS_2 fwd: CPU 30
Interface ethsBP1-01: has multi queue enabled
Interface ethsBP1: has multi queue enabled
Interface ethsBP5: has multi queue enabled
Interface ethsBP2: has multi queue enabled
Interface ethsBP1-02: has multi queue enabled
Interface ethsBP6: has multi queue enabled
Interface ethsBP3: has multi queue enabled
Interface ethsBP7: has multi queue enabled
Interface ethsBP4: has multi queue enabled
Interface ethsBP1-03: has multi queue enabled
Interface ethsBP8: has multi queue enabled
Interface ethsBP1-04: has multi queue enabled
-*- 1 blade: 2_01 -*-
VS_0: CPU 2 3 4 5 6 7 8 9 10 11 12 13 14 15 18 19 20 21 22 23 24 25 26 27 28 29
VS_0 fwk: CPU 2 3 4 5 6 7 8 9 10 11 12 13 14 15 18 19 20 21 22 23 24 25 26 27 28 29
VS_1: CPU 2 3 4 5 6 7 8 9 10 11 12 13 14 15 18 19 20 21 22 23 24 25 26 27 28 29
VS_1 fwk: CPU 2 3 4 5 6 7 8 9 10 11 12 13 14 15 18 19 20 21 22 23 24 25 26 27 28 29
VS_1 fwd: CPU 31
VS_2: CPU 2 3 4 5 6 7 8 9 10 11 12 13 14 15 18 19 20 21 22 23 24 25 26 27 28 29
VS_2 fwk: CPU 2 3 4 5 6 7 8 9 10 11 12 13 14 15 18 19 20 21 22 23 24 25 26 27 28 29
VS_2 fwd: CPU 30
Interface ethsBP1: has multi queue enabled
Interface ethsBP5: has multi queue enabled
Interface ethsBP2: has multi queue enabled
Interface ethsBP1-01: has multi queue enabled
Interface ethsBP6: has multi queue enabled
Interface ethsBP3: has multi queue enabled
Interface ethsBP7: has multi queue enabled
Interface ethsBP4: has multi queue enabled
Interface ethsBP1-02: has multi queue enabled
Interface ethsBP8: has multi queue enabled
Interface ethsBP1-03: has multi queue enabled
Interface ethsBP1-04: has multi queue enabled
g_all mq_mng --show
1_01:
Total 32 cores. Available for MQ 4 cores
i/f type state mode (queues) cores
actual/avail
------------------------------------------------------------------------------------------------
ethsBP1 igb Up Auto (4/4) 0,16,1,17
ethsBP1-01 i40e Up Auto (4/4) 0,16,1,17
ethsBP1-02 i40e Up Auto (4/4) 0,16,1,17
ethsBP1-03 i40e Up Auto (4/4) 0,16,1,17
ethsBP1-04 i40e Up Auto (4/4) 0,16,1,17
ethsBP2 igb Up Auto (4/4) 0,16,1,17
ethsBP3 igb Up Auto (4/4) 0,16,1,17
ethsBP4 igb Up Auto (4/4) 0,16,1,17
ethsBP5 igb Up Auto (4/4) 0,16,1,17
ethsBP6 igb Up Auto (4/4) 0,16,1,17
ethsBP7 igb Up Auto (4/4) 0,16,1,17
ethsBP8 igb Up Auto (4/4) 0,16,1,17
2_01:
Total 32 cores. Available for MQ 4 cores
i/f type state mode (queues) cores
actual/avail
------------------------------------------------------------------------------------------------
ethsBP1 igb Up Auto (4/4) 0,16,1,17
ethsBP1-01 i40e Up Auto (4/4) 0,16,1,17
ethsBP1-02 i40e Up Auto (4/4) 0,16,1,17
ethsBP1-03 i40e Up Auto (4/4) 0,16,1,17
ethsBP1-04 i40e Up Auto (4/4) 0,16,1,17
ethsBP2 igb Up Auto (4/4) 0,16,1,17
ethsBP3 igb Up Auto (4/4) 0,16,1,17
ethsBP4 igb Up Auto (4/4) 0,16,1,17
ethsBP5 igb Up Auto (4/4) 0,16,1,17
ethsBP6 igb Up Auto (4/4) 0,16,1,17
ethsBP7 igb Up Auto (4/4) 0,16,1,17
ethsBP8 igb Up Auto (4/4) 0,16,1,17
Thanks,
Wojtek
@woytekm if you got the advice from Check Point to change your SND/worker configuration, they should also be able to give you the commands for doing this.
But your question raises an interesting problem. I started a discussion, "dynamic balancing, VSX and core affinity", to get an answer for the newer versions with the dynamic performance features enabled. I think with R81.10 and up this should be the new way, more dynamic than static assignment. But there is no information from the field on this yet. Check Point should answer this.
Static core assignment limits you a lot, especially in the case of Maestro. But a lot of the VSX people here know their systems run stably with static assignment.
Hi Wolfgang, thanks for the reply. We have finally confirmed the proper static core allocation with Check Point - it should be done as I assumed.
Regarding dynamic allocation - a very interesting thing, and I'll read the SK that you've posted, but for now our local support engineer advised against dynamic allocation, because there is no certainty that heavily loaded VSs will not starve each other of resources with the dynamic config. I don't like static allocation, because it basically wastes a lot of cores in a multi-VS, two-site VSLS deployment, but we need to be sure first that the dynamic one will work well for us.
Regards
Could you please elaborate on the concern about using dynamic allocation?
Btw, Dynamic Balancing support for Scalable Platforms will be introduced in R81.20.
One of the problems is that Dynamic Balancing in combination with VSX AND Maestro is only supported from R81.20.
We ran into the same problem here and hoped to find an answer on how to assign more cores to the SND processes, as we have the default 2 cores running at 95%+ most of the day. We are currently running R80.20SP but will be upgraded this coming Sunday to R81.10, and we need to address the issue in the window we have then.
Hi,
Could you share how you allocated the static cores according to Check Point? If Dynamic Balancing is not possible or not advisable, manual configuration is the way to go.
Kind Regards, Jones
Finally got word that we need to use the following:
g_fw ctl affinity -s -d fwkall #
Where # is the number of cores that you want allocated to the FW workers, so if you have 32 cores available and you want an 8-core SND, the number should be 24.
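So in the case above (32 cores total, 8-core SND) that would presumably come down to this sketch:
g_fw ctl affinity -s -d fwkall 24
The resulting distribution can then be re-checked with the same commands used at the start of the thread, g_fw ctl affinity -l and g_all mq_mng --show.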
