I have a cluster of two gateways, each with 6 cores, running R80.40 Take 91.
Before I configured MDPS (Management Data Plane Separation), 3 cores were running the fw instances, and the data interfaces were using Multi-Queue as expected.
Then I configured MDPS with two interfaces. These two interfaces are assigned to CPU "All" and therefore overlap with the firewall cores.
Why are the interfaces assigned to the management plane not configured with Multi-Queue, or pinned to a dedicated CPU that is not shared with the fw instances?
sim affinity -l
eth0 : All
eth2 : All
eth3 : All
Multi queue interfaces: eth5 eth6
fw ctl affinity -l
Kernel fw_0: CPU 5
Kernel fw_1: CPU 2
Kernel fw_2: CPU 4
Daemon mpdaemon: CPU 2 4 5
Daemon fwd: CPU 2 4 5
Daemon in.acapd: CPU 2 4 5
Daemon lpd: CPU 2 4 5
Daemon in.asessiond: CPU 2 4 5
Daemon vpnd: CPU 2 4 5
Daemon wsdnsd: CPU 2 4 5
Daemon rad: CPU 2 4 5
Daemon usrchkd: CPU 2 4 5
Daemon in.geod: CPU 2 4 5
Daemon cprid: CPU 2 4 5
Daemon cpd: CPU 2 4 5
Interface eth5: has multi queue enabled
Interface eth6: has multi queue enabled
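In case it helps the discussion, this is roughly what I expected I could do to pin the management-plane interfaces away from the firewall cores. This is only a sketch based on the standard SecureXL affinity commands; I have not confirmed how (or whether) it interacts with MDPS, and the CPU numbers are just examples from my box:

```shell
# Show current interface-to-CPU assignments (SecureXL affinity)
sim affinity -l

# Switch from automatic to static affinity, then assign each
# interface to a CPU interactively; e.g. I would try putting
# eth0/eth2/eth3 on CPU 0 or 1, away from fw_0/fw_1/fw_2
# (CPUs 5, 2, 4 in my output above)
sim affinity -s

# Verify the CoreXL / daemon side afterwards
fw ctl affinity -l
```

My doubt is whether MDPS overrides these settings for interfaces that belong to the management plane, which would explain the "All" I am seeing.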