Kaspars_Zibarts
Employee

5900 appliance core split between CoreXL and SXL

Just want to hear other opinions. The 5900 appliance comes with SMT enabled: 16 hyperthreaded cores that by default are split 14 for CoreXL and 1 for SXL. We are planning to use a 2x10Gb bond as a trunk to the core, so I was thinking it might be wiser to use two CPU cores for SXL (giving additional redundancy/capacity in case one gets saturated), leaving 12 hyperthreaded cores for CoreXL. Before you ask - I have no idea about throughput levels; it will be deployed as a new segmentation firewall, so we don't know what to expect. The guesstimate so far is a couple of gig. And blade-wise we won't go nuts from the start - FW/IPS/Anti-Bot/IA most likely.
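For reference, this is roughly how I'd check what the box actually shipped with once it's racked (standard Gaia expert-mode commands, nothing specific to the 5900; output will obviously vary):

# CoreXL worker-to-CPU mapping and per-worker connection counts
fw ctl multik stat
# which CPUs the interfaces and fw workers are currently pinned to
fw ctl affinity -l
# logical CPUs the OS sees (16 with SMT on, 8 with it off)
grep -c ^processor /proc/cpuinfo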

4 Replies
Timothy_Hall
Legend

I don't have any experience with the new 5900 model yet, but are you sure it has 8 physical cores hyperthreaded to 16? That seems like a lot for that appliance level, and Oliver Fink has not yet updated the specs for the 5900 here:

https://lwf.fink.sh/tag/tobias-lachmann/

Providing the output of "cat /proc/cpuinfo" to the website above would help.   What code version are you planning to use, R77.30 or R80.10?  That will make a big difference.
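If it helps, something along these lines is usually enough to summarise the CPU layout (plain /proc/cpuinfo fields, nothing Check Point specific):

# CPU model
grep "model name" /proc/cpuinfo | sort -u
# number of physical packages/sockets
grep "physical id" /proc/cpuinfo | sort -u | wc -l
# physical cores per package
grep "cpu cores" /proc/cpuinfo | sort -u
# logical CPUs seen by the OS (doubles when hyperthreading is on)
grep -c ^processor /proc/cpuinfo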

Assuming it does have 8 physical cores, it should have a default 2/6 split that extends to 4/12 with hyperthreading enabled. Systems that have a lot of PXL/F2F traffic are good candidates for Hyperthreading. Depending on your IPS Profile (especially if using Default_Protection), you may instead have lots of traffic being accelerated and handled by the SNDs, in which case enabling Hyperthreading can actually hurt performance. Tough to say what will happen until you put it into production and see how traffic is getting handled with "fwaccel stats -s". Enabling Hyperthreading is not necessarily a no-brainer.
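To be concrete about what to look for there (labels may differ slightly between versions, so treat this as a sketch):

# confirm SecureXL is actually enabled
fwaccel stat
# summary of how traffic is being handled: accelerated (SecureXL fast path),
# PXL (medium path) and F2F (slow path) packet percentages
fwaccel stats -s

A high accelerated percentage argues for more SND capacity and against hyperthreading; high PXL/F2F percentages argue for more fw worker cores.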

Based on your post, I'd say try a 3/5 split w/ no hyperthreading initially and assess traffic acceleration levels; any time multiple 10gig interfaces are involved you may need to enable Multi-Queue and/or increase SND cores.
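For what it's worth, the split itself is changed from the CoreXL menu in cpconfig (takes effect after a reboot), and on R77.30/R80.10 Multi-Queue is managed with the cpmq tool - roughly:

# change the number of CoreXL fw instances (interactive menu, reboot required)
cpconfig
# show current Multi-Queue status per interface
cpmq get
# enable/adjust Multi-Queue (interactive)
cpmq set

Interface/SND affinity can then be fine-tuned with fw ctl affinity, as shown further down the thread.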

--
My book "Max Power: Check Point Firewall Performance Optimization"
now available via http://maxpowerfirewalls.com.

Kaspars_Zibarts
Employee

BTW, thanks for the book - got it as soon as it came out :)

Victor_MR
Employee

As Tim has suggested, it will depend on the amount of accelerated traffic (which you can monitor using "fwaccel stats -s").

If most of the traffic is going to be inspected by Anti-Bot and IPS, it will probably be handled by the CoreXL fw workers, and the initial configuration may be fine. If there is a lot of FW-only traffic, then you will probably need to allocate more CPU resources to the SND. Also, since you have 10Gbps interfaces, monitor the usage of the CPU assigned to each physical interface and enable Multi-Queue if that CPU's usage is high. If you expect a lot of FW-only traffic through each physical interface, I would consider enabling Multi-Queue from the beginning.
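A few standard ways to keep an eye on that (nothing exotic, all available in expert mode on Gaia):

# which CPU each interface's traffic is handled on (listed per CPU)
fw ctl affinity -l -r
# per-CPU utilisation: press 1 inside top for the per-CPU view, or use cpview
top
cpview
# RX drops per interface - a hint that the SND core serving that NIC is saturated
netstat -ni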


Kaspars_Zibarts
Employee

Thanks for the prompt replies, guys. Taking all opinions and the (very limited) information we have about the traffic profile and volumes into consideration, we went with a 4/12 split.

One thing I have been told by Check Point is that there is no real gain in performance from using hyperthreading for the SND, so you may as well just use the "primary" core. That's why I mentioned a 2/12 split in the original post - I should have explained that. In the 5900's case, physical core 0 is hyperthreaded to logical CPUs 0+8 and core 1 to 1+9, so for SND affinity I use only 0 and 1, leaving 8 and 9 unused:

[Expert@firewall:0]# fw ctl affinity -l
eth1-01: CPU 0
eth1-02: CPU 1
eth1-03: CPU 0
eth1-04: CPU 1
eth1: CPU 1
eth5: CPU 1
eth2: CPU 0
eth6: CPU 0
eth3: CPU 1
eth7: CPU 1
eth4: CPU 0
Mgmt: CPU 0
Kernel fw_0: CPU 15
Kernel fw_1: CPU 7
Kernel fw_2: CPU 14
Kernel fw_3: CPU 6
Kernel fw_4: CPU 13
Kernel fw_5: CPU 5
Kernel fw_6: CPU 12
Kernel fw_7: CPU 4
Kernel fw_8: CPU 11
Kernel fw_9: CPU 3
Kernel fw_10: CPU 10
Kernel fw_11: CPU 2

This theory seems to hold, as the default affinity settings shipped with the box (2/14 split) only used core 0 for SND; the "twin" core 8 was left unused.
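If anyone wants to double-check the twin pairing on their own box, the sibling mapping is visible straight from sysfs (standard Linux, nothing Check Point specific):

# logical CPUs sharing each physical core - on the 5900 this should show pairs like 0,8 and 1,9
cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list

top (press 1 for the per-CPU view) can then be used to confirm that CPUs 8 and 9 indeed stay idle with this affinity.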

Definitely can update the core info on Tobias' site :) - since I have used it a lot myself in the past when planning firewall purchases.
