Mattias_Jansson
Collaborator

Affinity and Multi-Queue on VSX VSLS

Hi!

I am about to replace an R80.30 VSX VSLS cluster with an R81.10 one (moving from 13800 to 16200 appliances).
The default Multi-Queue and affinity configuration is shown below, which I believe will be perfectly fine in my environment.

[Expert@vsx20:0]# mq_mng -o
Total 48 cores. Available for MQ 4 cores
i/f       driver   driver mode   state   mode (queues actual/avail)   cores
------------------------------------------------------------------------------------------------
Mgmt      igb      Kernel        Up      Auto (4/4)                   0,24,1,25
Sync      igb      Kernel        Up      Auto (4/4)                   0,24,1,25
eth1-01   igb      Kernel        Up      Auto (4/4)                   0,24,1,25
eth2-01   i40e     Kernel        Up      Auto (4/4)                   0,24,1,25
eth2-02   i40e     Kernel        Up      Auto (4/4)                   0,24,1,25
eth3-01   i40e     Kernel        Up      Auto (4/4)                   0,24,1,25
eth3-02   i40e     Kernel        Up      Auto (4/4)                   0,24,1,25

[Expert@vsx20:0]# fw ctl affinity -l
VS_0 fwk: CPU 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
VS_1 fwk: CPU 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
VS_2 fwk: CPU 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
VS_3 fwk: CPU 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
VS_4 fwk: CPU 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
VS_5 fwk: CPU 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
VS_6 fwk: CPU 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
VS_7 fwk: CPU 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
VS_8 fwk: CPU 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
Interface eth3-01: has multi queue enabled
Interface Mgmt: has multi queue enabled
Interface Sync: has multi queue enabled
Interface eth1-01: has multi queue enabled
Interface eth3-02: has multi queue enabled
Interface eth2-01: has multi queue enabled
Interface eth2-02: has multi queue enabled


In case I need more SND cores, I believe the preferred setup would be:
MQ: 0-1 12-13 24-25 36-37
Affinity: 2-11 14-23 26-35 38-47
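
Before pinning anything I would probably double-check the Hyper-Threading sibling pairs, e.g. like this (a quick sketch using standard Linux sysfs paths; the expected values are my assumption based on the mq_mng output above):

cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list     # expecting 0,24
cat /sys/devices/system/cpu/cpu12/topology/thread_siblings_list    # expecting 12,36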

I have tried to find the commands to configure that, but without success.

Anyone?

Best regards
Mattias

14 Replies
Ilya_Yusupov
Employee

The number of SNDs is set through VSX itself; try changing it via cpconfig in the VS0 context.

Mattias_Jansson
Collaborator

I thought that CoreXL should be disabled on VS0 in VSX?

Configuration Options:
----------------------
(1) Licenses and contracts
(2) SNMP Extension
(3) PKCS#11 Token
(4) Random Pool
(5) Secure Internal Communication
(6) Disable cluster membership for this gateway
(7) Disable Check Point Per Virtual System State
(8) Disable Check Point ClusterXL for Bridge Active/Standby
(9) Hyper-Threading
(10) Check Point CoreXL
(11) Automatic start of Check Point Products

(12) Exit

Enter your choice (1-12) :10

 

Configuring Check Point CoreXL...
=================================


CoreXL is currently disabled.


Would you like to enable CoreXL (y/n) [y] ?

 

Chris_Atkinson
Employee

Have you considered using Dynamic Balancing (sk164155) to manage elements of this?

Otherwise (specific affinity aside), set the new number of FWK instances:

fw ctl affinity -s -d -fwkall XX

reboot
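
For example, on a 48-core 16200 this could look like the sketch below (the value is only an assumption here; pick XX to match however many cores you leave for the SND):

fw ctl affinity -s -d -fwkall 40    # e.g. 48 cores minus 8 SND cores; adjust to your split
reboot
fw ctl affinity -l                  # after the reboot, confirm the fwk CPU lists exclude the SND cores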

CCSM R77/R80/ELITE
Mattias_Jansson
Collaborator

Dynamic Balancing is a great feature for VSX, but I don't dare to enable it in production yet.
I tried fw ctl affinity -s -d -fwkall XX, but then fwk used the same cores as the SND?
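
(For reference, a rough way to check for such an overlap, based on the output formats shown earlier in this thread - a sketch; the awk field positions are assumptions taken from those outputs:)

mq_mng -o | awk '/Kernel/ {print $NF}' | tr ',' '\n' | sort -u > /tmp/snd_cores
fw ctl affinity -l | awk '/fwk:/ {for (i = 4; i <= NF; i++) print $i}' | sort -u > /tmp/fwk_cores
comm -12 /tmp/snd_cores /tmp/fwk_cores    # any output here means a core is used by both SND and fwk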

 

genisis__
Leader

I'm using it on VSX. It seems to work great; however, there is a bug related to Dynamic Balancing starting after a reboot. I believe this has now been resolved (it might be in JHFA75, which has just been released as ongoing).

I would say that in most cases Dynamic Balancing aims to keep the SND cores at around 50% overall utilisation during peak hours.

CheckPointerXL
Advisor

Hello, can you explain the bug in more detail?

 

AmitShmuel
Employee

Hi Mattias,

The upcoming R81.20 will have Dynamic Balancing on by default on VSX.
I'd be happy to address any concerns you have regarding enabling it in production.

Feel free to approach me at amitshm@checkpoint.com.

Thanks,
Amit

Mattias_Jansson
Collaborator

Hi!

When it comes to Check Point, I always try to use the default settings where possible to avoid issues.
That is why I don't want to enable Dynamic Balancing, as it is off by default in R81.10.

Good news that it is enabled by default in R81.20. It will be nice not to have to deal with core tuning in the future.

/Mattias

Mattias_Jansson
Collaborator

When using mq_mng in auto mode, it works perfectly fine with the fw ctl affinity -s -d -fwkall XX command, with or without Hyper-Threading enabled. The cores are set up correctly and automatically after a reboot.
Thank you!

 

Henrik_Noerr1
Advisor

Use 'cat /proc/cpuinfo' - this shows the physical core layout.
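
For example (standard Linux, so it should also apply on Gaia 3.10; a sketch only):

grep -E 'processor|physical id|core id' /proc/cpuinfo              # maps each logical CPU to its socket and physical core
cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list     # logical CPUs that share physical core 0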

This matters regarding how L2 cache is shared on the cores.

https://www.quora.com/How-is-L2-cache-shared-between-different-cores-in-a-CPU

Whether this matters for an SND/FWK spread I have no idea, but it is something Dynamic Balancing surely does not take into account.

I would suspect that heavily utilized, multithreaded Virtual Systems benefit from sharing L2 cache, but I'm not sure even Check Point knows.

Internally we have always assigned VSs and SNDs based on this, simply as common sense.

Mattias_Jansson
Collaborator

The SND/FWK suggestion I wrote is taken from the Performance Tuning R81.10 Administration Guide: https://sc1.checkpoint.com/documents/R81.10/WebAdminGuides/EN/CP_R81.10_PerformanceTuning_AdminGuide...
The syntax for changing the SND/FWK split doesn't seem to apply to VSX, though?

 

Daniel_Szydelko
Collaborator

Hi,

Yes, this guide is based on a regular gateway - the idea is the same, but different commands need to be used for VSX.

Let's say, based on a 16200:

fw ctl affinity -s -d -fwkall 39

mq_mng -s manual -c 0-1 12-13 24-25 36-37

reboot

If needed, check and reconfigure the FWK cores afterwards to avoid collisions with the SND cores:

fw ctl affinity -s -d -vsid x-y -cpu 2-11 14-23 26-35 38-47 (where x-y is the range of your VSs)

You can also exclude some cores for heavily utilized processes like FWD and assign them to a particular VS, e.g. for VS2:

fw ctl affinity -s -d -vsid x-y -cpu 2-11 14-23 26-35 38-46

fw ctl affinity -s -d -pname fwd -vsid 2 -cpu 47
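
After the reboot it may be worth re-checking the result with the same commands used earlier in this thread, for example:

mq_mng -o              # the SND/MQ cores should now be 0-1, 12-13, 24-25, 36-37
fw ctl affinity -l     # the VS fwk lines (and fwd of VS2) should only use cores outside that SND set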

BR

Daniel.

AmitShmuel
Employee

Hi,

Dynamic Balancing is aware of NUMA and HTT, and maintains their order symmetrically, i.e.:
- It adds SNDs on physical CPUs (2 logical cores are added on each change)
- When adding SNDs, it strives to maintain an equal number per socket
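
A quick way to see the socket/core/thread layout it balances against (a sketch; lscpu is standard on Gaia 3.10):

lscpu | grep -E '^(Socket|Core|Thread)'    # sockets, cores per socket, and threads per core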

Henrik_Noerr1
Advisor

Thank you for disproving my uneducated guess! 🙂
