
VSX Affinity Question?

I have been testing VSX and affinity extensively in the lab over the last few days. Now a question has come up that I cannot fully explain.

Way 1:
When I search the internet, the common advice is that the CoreXL instances must be assigned to the cores. A typical assignment:
fw ctl affinity -s -d -vsid 0 -cpu 1
fw ctl affinity -s -d -vsid 1 -cpu 1
fw ctl affinity -s -d -vsid 2 -cpu 2
fw ctl affinity -s -d -vsid 3 -cpu 3
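To double-check what such an assignment actually did at the Linux level, the kernel's view of the allowed-CPU mask can be read directly from /proc. This is a generic Linux sketch: the PID here is a stand-in (the current shell); on a real VSX gateway you would look up the PID of the fwk process of the VS you are interested in.

```shell
# Sketch: inspect the CPU-affinity mask of a process via /proc.
# $pid is a stand-in; on a VSX gateway, substitute the PID of the
# fwk process of the relevant virtual system.
pid=$$
grep Cpus_allowed_list "/proc/$pid/status"
```

The output is a core list such as `0-3`, which should match the cores assigned via the affinity command.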

After that my affinity looks like this:
[Screenshot: vsx2.JPG]

Way 2:
From my point of view, it would be better to distribute the fwk processes across the cores as well. For this purpose, I set the following:
fw ctl affinity -s -d -pname fwk -vsid 0 -cpu 1
fw ctl affinity -s -d -pname fwk -vsid 1 -cpu 1
fw ctl affinity -s -d -pname fwk -vsid 2 -cpu 2
fw ctl affinity -s -d -pname fwk -vsid 3 -cpu 3
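Whether the threads of a pinned process actually end up on the intended cores can be observed with ps, which reports the core each thread last ran on. Again a generic sketch with a stand-in PID:

```shell
# Sketch: list each thread of a process with the core it last ran on.
# TID = thread ID, PSR = processor the thread was last scheduled on.
pid=$$   # stand-in; on the gateway, use the fwk PID of the VS in question
ps -L -p "$pid" -o tid,psr,comm
```

If the affinity took effect, the PSR values should stay within the assigned core set over time.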

Here the affinity looks as follows:
[Screenshot: vsx3.JPG]

Now the question arises: which of the two ways is the better one?


PS:
Top shows the fwkX processes:
[Screenshot: vsx4.JPG]

A small calculation sample for the utilization of process fwkX:

[Screenshot: vsx5.JPG]

- fwk0_X -> firewall instance thread that handles the packet processing
- fwk0_dev_X -> thread that handles communication between firewall instances and other Check Point daemons
- fwk0_kissd -> legacy kernel infrastructure (obsolete)
- fwk0_hp -> (high priority) cluster thread
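A utilization calculation like the one above can be cross-checked by summing the %CPU of all threads of the process, since the per-thread values add up to the process total. A generic sketch with a stand-in PID:

```shell
# Sketch: total CPU utilization of a process as the sum of its threads' %CPU.
pid=$$   # stand-in; on the gateway this would be the fwkX process PID
ps -L -p "$pid" -o pcpu= | awk '{sum += $1} END {printf "total %%CPU: %.1f\n", sum}'
```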

7 Replies

Hi @PhoneBoy @_Val_,

Is there someone at Check Point who can answer the VSX affinity question?

Regards
Heiko

Admin

As far as I am concerned, the second option is not required. Affinity for fwk is handled by the first approach automatically. User-space affinity with -pname is for other processes.


In the user-space firewall, this is basically one process that spawns several threads running on several cores. That is exactly what brought up my question. I have done the same on all my VSX installations as you described.

Here an example from the LAB:

Test 1:
# fw ctl affinity -s -d -pname fwk -vsid 0 -cpu 1
# fw ctl affinity -s -d -pname fwk -vsid 1 -cpu 1-3
# fw ctl affinity -s -d -pname fwk -vsid 2 -cpu 2
# fw ctl affinity -s -d -pname fwk -vsid 3 -cpu 3
# reboot
[Screenshot: ps1.JPG]

Test 2:
# fw ctl affinity -s -d -vsid 0 -cpu 1
# fw ctl affinity -s -d -vsid 1 -cpu 1-3
# fw ctl affinity -s -d -vsid 2 -cpu 2
# fw ctl affinity -s -d -vsid 3 -cpu 3
# reboot
[Screenshot: ps2.JPG]

When you do more research in the lab, even more amazing things come out. No matter which of the two commands I use, the instances always stay the same at the Linux level; at the process level I can't see any difference (see the pictures above). I have a system with 4 cores, and all instances are assigned to all cores at the process level, regardless of which command I use to change the affinity.

I would expect the threads to be bound to a single core if I assign only one via the affinity setting. I don't understand that. Hmmmm
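One thing worth checking here is that the affinity mask is kept per thread, not per process, so looking only at the process may hide differences. Each thread has its own entry under /proc/&lt;pid&gt;/task. A generic Linux sketch with a stand-in PID:

```shell
# Sketch: the affinity mask is per *thread*, so inspect every task entry.
pid=$$   # stand-in; use the fwk PID of the VS on a real gateway
for task in /proc/"$pid"/task/*; do
    printf '%s: ' "${task##*/}"          # thread ID
    grep Cpus_allowed_list "$task/status"
done
```

If the two commands really behave differently, the per-thread masks should differ even when the process-level view looks identical.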


Admin

If this is an R80.40 box, I think what you're seeing with fwk on all CPUs is expected behavior.
We have a mechanism to dynamically allocate the worker/SND split without a reboot.
As I recall, we basically allocate all cores/CPUs to both functions and dynamically use them (or not) depending on system demand.

Hi @PhoneBoy ,

With R80.40 this would be a logical explanation.

But it is an R80.30 VSX LS cluster.

 

Admin

Is it 3.10 kernel or 2.6 kernel?

2.6 kernel

[Screenshot: af1.JPG]
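The running kernel version can be checked from the shell with uname; a quick sketch to distinguish the two kernel generations discussed here:

```shell
# Sketch: distinguish a 2.6 from a 3.10 kernel via uname.
kernel=$(uname -r)
case "$kernel" in
    2.6.*)  echo "2.6 kernel ($kernel)" ;;
    3.10.*) echo "3.10 kernel ($kernel)" ;;
    *)      echo "other kernel ($kernel)" ;;
esac
```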
