phlrnnr
Advisor

R80.20, SMT/Hyperthreading, 6000 series

Just got a new pair of 6500 appliances with the R80.20 image on them.  It appears SMT/Hyperthreading is enabled by default on these.  Running 'fw ctl affinity -l' shows the SNDs allocated to CPUs 0 and 4, with the firewall worker instances on the CPUs in between:

[Expert@fw:0]# fw ctl affinity -l
eth1: CPU 4
eth5: CPU 0
eth2: CPU 0
eth6: CPU 4
eth3: CPU 0
eth4: CPU 4
Kernel fw_0: CPU 7
Kernel fw_1: CPU 3
Kernel fw_2: CPU 6
Kernel fw_3: CPU 2
Kernel fw_4: CPU 5
Kernel fw_5: CPU 1
Daemon in.acapd: CPU 1 2 3 5 6 7
Daemon fwd: CPU 1 2 3 5 6 7
Daemon lpd: CPU 1 2 3 5 6 7
Daemon mpdaemon: CPU 1 2 3 5 6 7
Daemon topod: CPU 1 2 3 5 6 7
Daemon in.asessiond: CPU 1 2 3 5 6 7
Daemon cpd: CPU 1 2 3 5 6 7
Daemon cprid: CPU 1 2 3 5 6 7
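In case it helps, the same mapping can also be listed per CPU rather than per interface/instance, which makes the SND vs. worker layout a bit easier to read (assuming I have the flags right for this version):

[Expert@fw:0]# fw ctl affinity -l -r
[Expert@fw:0]# fw ctl multik stat

The second command additionally lists the CoreXL worker instances with their CPU assignments and connection counts.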

@Timothy_Hall , I thought Check Point always set the SND/IRQ/Dispatcher cores as the lowest-numbered cores.  Do you think this is different because SMT is enabled?  I also thought it was a bad idea to put an SND core and a worker core on the two threads of the same physical core?

What are your thoughts on initial, out-of-the-box performance configuration now that Check Point is enabling SMT by default?  I'm thinking specifically of R80.20, since that is now Check Point's recommended code version, and of the 6000 series, since Check Point is heavily pushing these as the appliances of the future.  I'm thinking NGFW blades.
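For reference, here is roughly what I'd be checking on the new boxes to see what the blade mix and acceleration look like (command names as I understand them for R80.20; output formats may differ slightly by build):

[Expert@fw:0]# enabled_blades
[Expert@fw:0]# fwaccel stat
[Expert@fw:0]# fwaccel stats -s

The last one reports what percentage of packets is being accelerated, which seems like the key input for deciding the SND/worker split.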

Thank you for any insight you can provide!

 

3 Replies
Timothy_Hall
Champion

Cores 0-3 are the physical cores.  When SMT/Hyperthreading is enabled, Cores 4-7 are added: Core 4 is the second thread of execution on physical Core 0, Core 5 is the second thread of execution on physical Core 1, and so on.  So yes, that is completely expected with SMT enabled.  In your case SND/IRQ handling is happening on one physical core with two threads of execution, just like it should.
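If you want to confirm the pairing yourself, the sibling mapping is exposed under sysfs (assuming the standard Linux topology files are present on your Gaia kernel):

[Expert@fw:0]# cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list
[Expert@fw:0]# cat /sys/devices/system/cpu/cpu1/topology/thread_siblings_list

With SMT enabled on a 6500, cpu0 should report 0,4 and cpu1 should report 1,5, matching the layout you posted above.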

 

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
phlrnnr
Advisor

Thanks for clarifying that for me!  So, with SMT enabled, what is your recommended baseline SND/CoreXL split for a new deployment?  I'm looking at page 228 in version 2 of your book, and the chart is based on SMT turned off.  Would you recommend keeping the default split of 2/6 on the 6500 with SMT on?  I would think 3/5 would not be good, because then you'd be sharing at least one physical core between SND and CoreXL.

What about the 6800 which has 10 physical / 20 virtual cores?
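For what it's worth, the logical vs. physical counts can be sanity-checked from the OS (assuming lscpu is available on the Gaia build):

[Expert@fw:0]# grep -c ^processor /proc/cpuinfo
[Expert@fw:0]# lscpu | grep -E "Thread|Core|Socket"

The first command gives the logical CPU count; lscpu's "Thread(s) per core" and "Core(s) per socket" lines show the physical layout.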

Thanks for your analysis and thoughts!

Timothy_Hall
Champion

The starting split for a 6500 will depend on what blades you have enabled.  If you are using the typical "deep inspection" blades like APCL/URLF and a couple of Threat Prevention blades, the default 2/6 split is probably appropriate.  If you are only using the Firewall and IPSec VPN blades (and a large percentage of traffic will be accelerated), a 4/4 split would be a good starting point.  The same ratios apply to a 6800, depending on the blades enabled.
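As a rough sketch of how I'd validate and adjust that in practice (R80.20 behaviour as I recall it; the exact cpconfig menu wording can vary by version):

[Expert@fw:0]# fwaccel stats -s
[Expert@fw:0]# fw ctl multik stat
[Expert@fw:0]# cpconfig

fwaccel stats -s shows the accelerated percentage, fw ctl multik stat shows the current worker instances, and the split itself is changed in cpconfig by adjusting the number of CoreXL firewall instances (for example, 4 instances on an 8-thread 6500 leaves 4 CPUs for SND, i.e. a 4/4 split).  The change takes effect after a reboot.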

 

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
