Daniel_Ionut_Ba
Explorer

CoreXL affinity mismatch on R81.20 VSX Cluster

Hi CheckMates,

We have the following setup in our infrastructure:
An Open Server VSX cluster running R81.20 Take 76, with 4 nodes of 16 cores each, hosting 106 configured VSs and around 26 active VSs per node. Affinity is left at the default on all 4 nodes, yet on one node the default affinity does not look right: all the fwk instances are assigned to a single CPU, core 7.
The second issue is that on the other 3 nodes some cores are assigned only to VS0, and cores 2, 4 and 6 are constantly running between 80% and 100% utilization.
I have a TAC ticket open for the issue, but I would like to know whether anyone has had similar problems, or whether I have somehow hit a bug.
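
In case it helps others reproduce the checks, the same data can also be listed per CPU instead of per VS, and cpview shows the live per-core load; these are standard Gaia/VSX commands, included here just as a suggestion:

[Expert@AAA:0]# fw ctl affinity -l -r       # reverse view: affinity listed per CPU
[Expert@AAA:0]# fw ctl affinity -l -a -v    # all affinities, verbose
[Expert@AAA:0]# cpview                      # live per-core utilization (CPU view)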

 

[Expert@AAA:0]# fw ctl affinity -l
VS_0 fwk: CPU 2 3 4 5 6 7 8 9 10 11 12 13 14 15
VS_1 fwk: CPU 2 3 4 5 6 7 8 9 10 11 12 13 14 15
Interface eth0: has multi queue enabled
Interface eth5: has multi queue enabled
Interface eth4: has multi queue enabled
Interface eth1: has multi queue enabled
Interface eth7: has multi queue enabled
Interface eth6: has multi queue enabled
Interface eth2: has multi queue enabled
Interface eth3: has multi queue enabled

 

[Expert@BBB:0]# fw ctl affinity -l
VS_0 fwk: CPU 2 3 4 5 6 7
VS_1 fwk: CPU 1 3 5 7 9 11 13 15
VS_2 fwk: CPU 1 3 5 7 9 11 13 15
VS_3 fwk: CPU 1 3 5 7 9 11 13 15
VS_4 fwk: CPU 1 3 5 7 9 11 13 15
VS_5 fwk: CPU 1 3 5 7 9 11 13 15
VS_6 fwk: CPU 1 3 5 7 9 11 13 15
VS_7 fwk: CPU 1 3 5 7 9 11 13 15
VS_8 fwk: CPU 1 3 5 7 9 11 13 15
VS_9 fwk: CPU 1 3 5 7 9 11 13 15
VS_10 fwk: CPU 1 3 5 7 9 11 13 15
...........................

VS_104 fwk: CPU 1 3 5 7 9 11 13 15
VS_105 fwk: CPU 1 3 5 7 9 11 13 15
VS_106 fwk: CPU 1 3 5 7 9 11 13 15
Interface eth0: has multi queue enabled
Interface eth5: has multi queue enabled
Interface eth4: has multi queue enabled
Interface eth1: has multi queue enabled
Interface eth7: has multi queue enabled
Interface eth6: has multi queue enabled
Interface eth2: has multi queue enabled
Interface eth3: has multi queue enabled

 

 

[Expert@AAA:0]# fw ctl affinity -l -x | grep fwk
| 427 | 82 | 2 3 4 5 6 7 8 9 10 11 12 13 14 15 | | | | | fwk_wd
| 643 | 82 | 7 | | | | | fwk
| 923 | 6 | 2 3 4 5 6 7 8 9 10 11 12 13 14 15 | | | | | fwk_wd
| 960 | 6 | 7 | | | | | fwk
| 1021 | 32 | 2 3 4 5 6 7 8 9 10 11 12 13 14 15 | | | | | fwk_wd
| 1037 | 32 | 7 | | | | | fwk
| 1184 | 24 | 2 3 4 5 6 7 8 9 10 11 12 13 14 15 | | | | | fwk_wd
| 1204 | 76 | 2 3 4 5 6 7 8 9 10 11 12 13 14 15 | | | | | fwk_wd
| 1230 | 76 | 7 | | | | | fwk
| 1569 | 86 | 2 3 4 5 6 7 8 9 10 11 12 13 14 15 | | | | | fwk_wd
| 1634 | 86 | 7 | | | | | fwk
| 1788 | 3 | 2 3 4 5 6 7 8 9 10 11 12 13 14 15 | | | | | fwk_wd
| 1833 | 3 | 7 | | | | | fwk
| 1853 | 63 | 2 3 4 5 6 7 8 9 10 11 12 13 14 15 | | | | | fwk_wd
| 1971 | 63 | 7 | | | | | fwk
| 2142 | 36 | 2 3 4 5 6 7 8 9 10 11 12 13 14 15 | | | | | fwk_wd
| 2152 | 26 | 2 3 4 5 6 7 8 9 10 11 12 13 14 15 | | | | | fwk_wd
| 2166 | 7 | 2 3 4 5 6 7 8 9 10 11 12 13 14 15 | | | | | fwk_wd
| 2168 | 36 | 7 | | | | | fwk
| 2252 | 7 | 7 | | | | | fwk
| 2290 | 26 | 7 | | | | | fwk

 

[Expert@DEFRA6-BBB:0]# fw ctl affinity -l -x | grep fwk
| 310 | 90 | 1 2 3 4 5 6 7 9 11 13 15 | | | | | fwk_wd
| 342 | 90 | 1 3 5 7 9 11 13 15 | P | | | | fwk
| 719 | 14 | 1 2 3 4 5 6 7 9 11 13 15 | | | | | fwk_wd
| 732 | 14 | 1 3 5 7 9 11 13 15 | P | | | | fwk
| 1215 | 43 | 1 2 3 4 5 6 7 9 11 13 15 | | | | | fwk_wd
| 1465 | 5 | 1 2 3 4 5 6 7 9 11 13 15 | | | | | fwk_wd
| 1613 | 43 | 1 3 5 7 9 11 13 15 | P | | | | fwk
| 1908 | 5 | 1 3 5 7 9 11 13 15 | P | | | | fwk
| 1974 | 77 | 1 2 3 4 5 6 7 9 11 13 15 | | | | | fwk_wd
| 2058 | 94 | 1 2 3 4 5 6 7 9 11 13 15 | | | | | fwk_wd
| 2100 | 77 | 1 3 5 7 9 11 13 15 | P | | | | fwk
| 2158 | 87 | 1 2 3 4 5 6 7 9 11 13 15 | | | | | fwk_wd
| 2335 | 94 | 1 3 5 7 9 11 13 15 | P | | | | fwk
| 2374 | 87 | 1 3 5 7 9 11 13 15 | P | | | | fwk
| 2716 | 0 | 7 | | | | | fwk_forke
| 2725 | 0 | 1 2 3 4 5 6 7 9 11 13 15 | | | | | fwk_wd
| 2737 | 0 | 1 2 3 4 5 6 7 9 11 13 15 | P |*| | | fwk
| 2809 | 91 | 1 2 3 4 5 6 7 9 11 13 15 | | | | | fwk_wd
| 2816 | 30 | 1 2 3 4 5 6 7 9 11 13 15 | | | | | fwk_wd
| 2882 | 48 | 1 2 3 4 5 6 7 9 11 13 15 | | | | | fwk_wd
| 2972 | 91 | 1 3 5 7 9 11 13 15 | P | | | | fwk
| 2997 | 30 | 1 3 5 7 9 11 13 15 | P | | | | fwk
| 3121 | 57 | 1 2 3 4 5 6 7 9 11 13 15 | | | | | fwk_wd
| 3141 | 57 | 1 3 5 7 9 11 13 15 | P | | | | fwk
| 3540 | 31 | 1 2 3 4 5 6 7 9 11 13 15 | | | | | fwk_wd
| 3543 | 48 | 1 3 5 7 9 11 13 15 | P | | | | fwk

emmap
Employee

It looks like someone has been playing with affinities in the past. I recommend starting by resetting them to defaults.

To reset the affinities to defaults: fw ctl affinity -vsx_factory_defaults

This will prompt you to reboot the box you run it on, so do it in a change window. The VS failovers/failbacks will be stateful. You should then have everything back to normal on that box.
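
As a rough sketch of the flow on one member (commands only, output omitted):

[Expert@BBB:0]# fw ctl affinity -vsx_factory_defaults    # clears the custom affinity settings, prompts for reboot
[Expert@BBB:0]# reboot
[Expert@BBB:0]# fw ctl affinity -l                       # after reboot, fwk instances should span the CoreXL cores again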

If you ever want to adjust the SND/CoreXL core split on a VSX gateway, this is the command:

fw ctl affinity -s -d -fwkall <Number of CPUs>

That sets the size of the CoreXL (CXL) core pool; the rest of the cores will be used for SND/Multi-Queue. The other, per-VS affinity commands override that core-split assignment for specific VSs, and are generally unnecessary. I've never seen a VSX environment that required anything other than tweaking the core split with the command above, and with R81.20 on supported Check Point appliances even that is no longer required thanks to Dynamic Balancing.
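
For example, on a 16-core box like yours, giving 10 cores to the CoreXL pool leaves 6 for SND/Multi-Queue (the 10/6 split is purely illustrative, not a sizing recommendation):

[Expert@AAA:0]# fw ctl affinity -s -d -fwkall 10    # 10 cores for the fwk (CoreXL) pool
[Expert@AAA:0]# fw ctl affinity -l                  # verify the new split; the remaining 6 cores serve SND/MQ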

See: https://sc1.checkpoint.com/documents/R81.20/WebAdminGuides/EN/CP_R81.20_PerformanceTuning_AdminGuide...
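
If memory serves, on appliances you can check whether Dynamic Balancing is running with dynamic_balancing -p (the hostname below is just a placeholder):

[Expert@GW:0]# dynamic_balancing -p    # prints the current Dynamic Balancing status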
