Hi,
I have a VM running R80.30 Open Server with 1 NIC. We use VLANs to segregate the inside/outside zones, so we only assigned 1 NIC to the VM. I recently increased the cores to 8, and I'm running 6 firewall instances. Checking processor utilization, CPU0 is well utilized, CPU1 is underutilized, and the rest are fairly evenly utilized.
Firewall instances are running on CPU2-7.
The "Network-per-CPU" view shows most of the SecureXL traffic being handled by CPU0.
Is there some way to distribute the SND processing more evenly between CPU0 and CPU1 without adding another NIC?
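For reference, these commands show how cores are currently assigned on the gateway (interface names and output will be specific to your system):
fw ctl affinity -l -r    # lists which CPUs handle which interfaces (SNDs) and which run firewall instances
fw ctl multik stat       # per-instance CPU assignment and connection counts
cpview                   # the CPU and Network views show per-core load over time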
Multi-Queue can spread the load across the 2 SNDs, but it will depend on whether your underlying interface type supports it. e1000 does not, while vmxnet3 does. Use ethtool -i <interface> to check this.
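If the driver does support it, ethtool can also report how many queues it exposes, which gives a rough idea of how far the load can be spread:
ethtool -l <interface>    # shows pre-set maximums and current settings for RX/TX/Combined queues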
Thanks Timothy,
The interface is Virtio.
[Expert@fwcp1:0]# ethtool -i eth0
driver: virtio_net
version:
firmware-version:
bus-info: virtio3
Virtio does support multiqueue. I didn't see any mention of Virtio in the documentation, but I'll give it a go in a lab before I try it on my production system.
Actually, this SK says that Multi-Queue is supported for virtio_net (and ena), which is news to me; those must have been added recently:
sk153373: Multi-Queue Management for Check Point Security Gateway
Gaia 3.10 is required. You are probably running R80.30 on the Gaia 2.6.18 kernel, although there is a Gaia 3.10 build of R80.30 available. Or just go to R80.40, which is Gaia 3.10 only.
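You can confirm which kernel the gateway is running with:
uname -r    # a 2.6.18-based version is the older kernel, a 3.10-based version is Gaia 3.10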
Hi @waynej,
I agree with @Timothy_Hall, but I have one more small note. From a performance point of view, it is better to use two or more network cards, which lets you reach a higher packet rate. Read more here: R80.x - Performance Tuning Tip - Intel Hardware
You should use more SNDs and enable Multi-Queue (a quick way to check the current Multi-Queue state is sketched after the table below). Read more here: R80.x - Performance Tuning Tip - Multi Queue
New supported Multi Queue drivers:

| Driver | GAIA version | Speed [Gbps] | Description | Maximal Number of RX Queues |
|---|---|---|---|---|
| igb | R80.10+ | 1 | Intel® PCIe 1 Gbps | 2-16 (depends on the interface) |
| ixgbe | R80.10+ | 10 | Intel® PCIe 10 Gbps | 16 |
| i40e | R80.10+ | 40 | Intel® PCIe 40 Gbps | 64 |
| i40evf | R81 | 40 | Intel® i40e 40 Gbps | 4 |
| mlx5_core | R80.10+ | 40 | Mellanox® ConnectX® mlx5 core driver | 60 |
| ena | R81 | 20 | Elastic Network Adapter in Amazon® EC2 | configured automatically |
| virtio_net | R81 | 10 | VirtIO paravirtualized device driver from KVM® | configured automatically |
| vmxnet3 | R80.40+ | 10 | VMXNET Generation 3 driver from VMware® | configured automatically |
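Once you are on a release where the driver is supported, the current Multi-Queue state can be checked and adjusted from the CLI. This is only a sketch; the exact mq_mng options are documented in sk153373 and may differ by version:
mq_mng --show           # Gaia 3.10 (R80.40/R81): display the current Multi-Queue configuration
cpmq get                # older Gaia 2.6.18 kernels: show Multi-Queue status for supported interfaces
cat /proc/interrupts    # per-queue interrupt lines (named after the interface or virtio device) show the queues actually in use
Changing the SND/firewall-instance split itself is done via cpconfig ("Configure Check Point CoreXL"), followed by a reboot.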
@heiko_ploehn - Thanks for the links. From a quick read they look good. I'll take some time in the near future to go through them in more detail.
It seems like Multi-Queue might only be an option for this firewall when we upgrade it to R80.40/R81.
Thanks for the response.