Just got a new pair of 6500 appliances with the R80.20 image on them. It appears SMT/Hyperthreading is enabled by default on these. Running 'fw ctl affinity -l' shows the SNDs allocated to CPUs 0 and 4, with the firewall worker instances spread across the CPUs in between:
[Expert@fw:0]# fw ctl affinity -l
eth1: CPU 4
eth5: CPU 0
eth2: CPU 0
eth6: CPU 4
eth3: CPU 0
eth4: CPU 4
Kernel fw_0: CPU 7
Kernel fw_1: CPU 3
Kernel fw_2: CPU 6
Kernel fw_3: CPU 2
Kernel fw_4: CPU 5
Kernel fw_5: CPU 1
Daemon in.acapd: CPU 1 2 3 5 6 7
Daemon fwd: CPU 1 2 3 5 6 7
Daemon lpd: CPU 1 2 3 5 6 7
Daemon mpdaemon: CPU 1 2 3 5 6 7
Daemon topod: CPU 1 2 3 5 6 7
Daemon in.asessiond: CPU 1 2 3 5 6 7
Daemon cpd: CPU 1 2 3 5 6 7
Daemon cprid: CPU 1 2 3 5 6 7
@Timothy_Hall , I thought Check Point always assigned the SND/IRQ/Dispatcher function to the lowest-numbered cores. Do you think this is different because SMT is enabled? I also thought it was a bad idea to put SND and worker instances next to each other on the two threads of the same physical core?
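One way to reason about the pairing question: on Gaia (Linux) you can see which logical CPUs share a physical core with cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list. The sketch below assumes a 6500 with 4 physical cores / 8 threads and the common Intel enumeration where cpu N and cpu N+4 are SMT siblings (an assumption to verify on the actual box), then classifies each physical core by what the affinity output above puts on its two threads:

```shell
#!/bin/sh
# Assumption: SMT siblings are cpu N and cpu N+4 (verify via sysfs, see above).
# From the 'fw ctl affinity -l' output: SNDs on CPUs 0 and 4, workers on the rest.
role() {
  case " 0 4 " in *" $1 "*) echo SND ;; *) echo worker ;; esac
}
for core in 0 1 2 3; do
  a=$core; b=$((core + 4))    # assumed sibling pair on one physical core
  echo "physical core $core: cpu $a=$(role "$a"), cpu $b=$(role "$b")"
done
```

If that sibling mapping holds, both SND instances land on the two threads of one physical core and every worker is paired with another worker, i.e. SND and workers would not actually be sharing a core despite the interleaved numbering.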
What are your thoughts on initial, out-of-the-box performance configuration now that Check Point is enabling SMT by default? I'm asking specifically about R80.20, since that is now Check Point's recommended release, and specifically about the 6000 series, since Check Point is heavily pushing these as the appliances of the future. Assume NGFW blades are enabled.
Thank you for any insight you can provide!