Hello team,
Has anyone gone through this? I have a Check Point environment running on Hyper-V with 24 vCPUs. According to sk106855:
"Hyper-V automatically affines the hv_netvsc interfaces to all CPU cores." But that is not what is happening: Check Point is
assigning only 2 vCPUs to the SNDs, and when bandwidth consumption increases from 700 Mbps to 2 Gbps we start to see slowness, packet loss, latency, etc. There is no spillover to other CPUs, I don't see new SND instances being started on other cores, and those 2 vCPUs sit at 100% usage.
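For reference, these are the commands I'm using to check how the SNDs and the CoreXL workers are distributed (standard Gaia/Check Point commands, nothing specific to this environment):

===
# list interface, daemon and kernel-instance affinity per CPU
fw ctl affinity -l -r
# show the CoreXL FW worker instances and their CPUs
fw ctl multik stat
# see how the NIC interrupts are spread across the cores
cat /proc/interrupts | grep eth
===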
We opened a ticket with TAC and they made the following recommendation: enable SR-IOV and change the network card to an Intel or Mellanox model. We followed the recommendation, but the driver does not change; it remains hv_netvsc, as described in sk106855, and when we try to assign the interface affinity manually, both via command and via file, the settings are not applied.
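To double-check whether the SR-IOV VF is actually being exposed to the VM, I also looked at the driver and the PCI devices from Expert mode (just a sanity check, eth0 being my example interface):

===
# driver in use for the interface (still reports hv_netvsc here)
ethtool -i eth0
# would list a Mellanox/Intel VF if SR-IOV were passed through
lspci | grep -i ethernet
===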
===
Failed to assign by hand:
[Expert@FW_DC_03:0]# fw ctl affinity -s -i eth0 0
interface eth0 invalid
Check that the eth0 interface is active physical interface
===
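For completeness, this is the file-based attempt. I'm editing $FWDIR/conf/fwaffinity.conf; the syntax and the apply script below are how I understand them from the performance tuning documentation, so take this as a sketch of what we tried, with eth0/eth1 and the CPU IDs only as examples:

===
[Expert@FW_DC_03:0]# cat $FWDIR/conf/fwaffinity.conf
i eth0 0
i eth1 1
[Expert@FW_DC_03:0]# $FWDIR/scripts/fwaffinity_apply
===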
I've attached some pictures of the environment.
I know that sk106855 says this configuration is not supported and that SR-IOV is not certified, but we are following a recommendation from Check Point's R&D team. Has anyone gone through a similar case? Have you received this recommendation from the R&D team before?
Information about this environment:
Environment: Open Server on Hyper-V (only the Check Point VM runs on this Hyper-V host)
vCPU: 24
Version: R80.40 - take 196 - Cluster
Memory: 48 GB
Thank you!