waynej
Participant

CoreXL: SND on R80.30 Openserver 8 CPUs and 1 NIC

Hi,

I have a VM running R80.30 Openserver with 1 NIC.  We use VLANs to segregate the inside/outside zones, so we only assigned 1 NIC to the VM.  I recently increased the cores to 8, and I'm running 6 firewall instances.  Checking processor utilization, CPU0 is heavily utilized, CPU1 is under-utilized, and the rest are fairly evenly utilized.

20201116_top.png

Firewall instances are running on CPU2-7

20201116_fw_instance_affinity.png

The "Network-per-CPU" shows most of the SecureXL traffic is being handled by CPU0. 

20201116_SXL_Network-per-CPU.png
Is there some way to distribute the SND processing more evenly between CPU0 and CPU1 without adding another NIC?

Timothy_Hall
Champion

Multi-Queue can spread the load across the 2 SNDs, but it depends on whether your underlying interface type supports it: e1000 does not, while vmxnet3 does.  Use ethtool -i <interface> to check this.
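The driver check above looks roughly like this (a minimal sketch, assuming a Gaia expert-mode shell and that the interface in question is eth0):

```shell
# Identify the NIC driver the hypervisor presents
# (e1000 = no Multi-Queue; vmxnet3 / virtio_net = Multi-Queue capable)
ethtool -i eth0

# If the driver supports it, list the RX/TX queue ("channel") counts;
# "Combined" shows current vs. maximum number of queues
ethtool -l eth0
```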

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
waynej
Participant

Thanks Timothy,

The interface is Virtio.

[Expert@fwcp1:0]# ethtool -i eth0
driver: virtio_net
version:
firmware-version:
bus-info: virtio3

Virtio does support multi-queue.  I didn't see any mention of Virtio in the documentation, but I'll give it a go in a lab before trying it on my production system.

Timothy_Hall
Champion

Actually this SK says that Multi-Queue is supported for virtio_net (and ena), which is news to me; those must have been added recently:

sk153373: Multi-Queue Management for Check Point Security Gateway

Gaia 3.10 is required.  You are probably running Gaia 2.6.18 with R80.30, although there is a Gaia 3.10 build of R80.30 available.  Or just go to R80.40, which is all Gaia 3.10.
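A quick way to confirm which kernel series the gateway is on before planning the move (a minimal sketch; the exact version strings your build reports will differ):

```shell
# Gaia 2.6.18-era builds report a 2.6.18-* kernel; Gaia 3.10 builds report 3.10.*
kernel=$(uname -r)
echo "Kernel: $kernel"
case "$kernel" in
  3.10*)  echo "3.10 kernel - virtio_net Multi-Queue is possible per sk153373" ;;
  2.6.*)  echo "2.6.18-era kernel - move to a Gaia 3.10 build first" ;;
  *)      echo "Unrecognized kernel series: $kernel" ;;
esac
```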

HeikoAnkenbrand
Champion

Hi @waynej,

I agree with @Timothy_Hall, but I have one more small note: from a performance point of view, it is better to use two or more network cards, as this allows you to reach a higher packet rate. Read more here: R80.x - Performance Tuning Tip - Intel Hardware

You should use more SNDs and enable Multi-Queue. Read more here: R80.x - Performance Tuning Tip - Multi Queue

New supported Multi Queue drivers:

Driver      Gaia version  Speed [Gbps]  Description                                      Max. RX queues
igb         R80.10+       1             Intel® PCIe 1 Gbps                               2-16 (depends on the interface)
ixgbe       R80.10+       10            Intel® PCIe 10 Gbps                              16
i40e        R80.10+       40            Intel® PCIe 40 Gbps                              64
i40evf      R81           40            Intel® i40e 40 Gbps                              4
mlx5_core   R80.10+       40            Mellanox® ConnectX® mlx5 core driver             60
ena         R81           20            Elastic Network Adapter in Amazon® EC2           configured automatically
virtio_net  R81           10            VirtIO paravirtualized device driver from KVM®   configured automatically
vmxnet3     R80.40+       10            VMXNET Generation 3 driver from VMware®          configured automatically
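To see how the SND/worker split actually looks on the box, these standard gateway commands show which CPUs serve the interfaces versus the firewall instances, and how much traffic SecureXL is accelerating (a sketch; run in expert mode on the gateway):

```shell
# CPU -> interface / firewall-instance affinity map
fw ctl affinity -l -r

# Summary of accelerated vs. F2F (slow-path) traffic handled by SecureXL
fwaccel stats -s
```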


➜ CCSM Elite, CCME, CCTE
waynej
Participant

@HeikoAnkenbrand - Thanks for the links.  From a quick read it looks good.  I'll take some time in the near future to go through it in more detail.

It seems like Multi-Queue might only be an option for this firewall once we upgrade it to R80.40/R81.

Thanks for the response.

