guiausechi
Participant

Distribution of SNDs in a Hyper-V environment

Hello team,

Has anyone gone through this? I have a Check Point environment running on Hyper-V with 24 vCPUs. According to sk106855,
"Hyper-V automatically affines the hv_netvsc interfaces to all CPU cores." That is not what is happening: Check Point is
assigning only 2 vCPUs to the SNDs, and when bandwidth consumption climbs from 700 Mbps to 2 Gbps we start to see slowness, packet loss, latency, etc. There is no spillover to other CPUs, I don't see new SND instances being started on other cores, and those 2 vCPUs sit at 100% usage.
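
For anyone wanting to verify the same thing, something like this should show the current driver, affinity and interrupt layout (standard Gaia/Check Point commands, output omitted here):

[Expert@FW_DC_03:0]# ethtool -i eth0             # which driver the interface is bound to (hv_netvsc in our case)
[Expert@FW_DC_03:0]# fw ctl affinity -l -r       # CPU-to-interface/worker affinity map
[Expert@FW_DC_03:0]# fw ctl multik stat          # per-instance (worker) CPU assignment and connection load
[Expert@FW_DC_03:0]# grep eth /proc/interrupts   # how the interface interrupts are spread across the cores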

We opened a ticket with TAC and they made the following recommendation: activate SR-IOV and change the network card to Intel or Mellanox. We followed the recommendation, however the driver does not change; it remains hv_netvsc, just as sk106855 states, and when we try to assign the affinity manually, both via command and via file, the settings are not applied.

===
Manual assignment attempt fails:

[Expert@FW_DC_03:0]# fw ctl affinity -s -i eth0 0

interface eth0 invalid

Check that the eth0 interface is active physical interface

====
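
For completeness, a couple of commands that should show whether the SR-IOV virtual function actually surfaced inside the VM (if no VF is visible, Gaia keeps using hv_netvsc):

[Expert@FW_DC_03:0]# lspci | grep -i ethernet    # an SR-IOV VF normally shows up as an extra Intel/Mellanox PCI device
[Expert@FW_DC_03:0]# ip link show                # a VF usually appears as an additional interface inside the guest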


I have attached some screenshots of the environment.

I know that sk106855 says this configuration is not supported and that SR-IOV is not certified, but we are following a recommendation from Check Point's R&D team. Has anyone gone through a similar case? Have you received this recommendation from the R&D team before?

Information about the environment:

Environment: open server on Hyper-V (this Hyper-V host runs only the Check Point VM)
vCPU: 24
Version: R80.40 - take 196 - Cluster
Memory: 48 GB

Thank you!

Timothy_Hall
Champion

I haven't done much with running gateways in Hyper-V, but I believe the lack of support for vRSS will ensure that all inbound traffic will only go through one of your SND cores.  This is evident in your network-per-cpu.png screenshot.  You also say "I don't see new SND processes being started in other instances and those 2 vCPUs are at 100% usage" implying you expect the Dynamic Workloads/Split feature to add more SND cores to help handle the load; this feature is only supported on certain Check Point appliances and is not supported in a virtualized environment at this time.

When you scale traffic up from 700 Mbps to 2000 Mbps, that one SND core is probably getting saturated.  If you don't have any blades enabled that call for Deep Inspection in the Medium Path (APCL/URLF, Threat Prevention), the vast majority of inspection processing will happen in the fastpath on that one SND core, making the situation worse.  Ironically, enabling more features (pushing most traffic handling into the Medium Path) or forcing traffic into the slowpath via sk104468 will allow the Dynamic Dispatcher to spread out processing across multiple workers/instances and potentially improve performance.
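
You can confirm where your traffic is actually being processed with something like:

[Expert@FW_DC_03:0]# fwaccel stats -s     # ratio of accelerated (fastpath) vs. F2F (slowpath) packets
[Expert@FW_DC_03:0]# fw ctl multik stat   # connections/peak per worker, i.e. how the Dynamic Dispatcher is spreading the load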

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
guiausechi
Participant

Hello @Timothy_Hall,

Understood, thank you for the explanation!

Chris_Atkinson
Employee

Out of interest, which underlying version of Windows Server is in use, and was there a recommendation to upgrade from R80.40?

CCSM R77/R80/ELITE
guiausechi
Participant

Hello @Chris_Atkinson 

Windows Server 2019. We had already considered upgrading, but in TAC's words there would be no performance gain from it, so I'm holding off for now.

PhoneBoy
Admin

Independent of the driver used for the NICs (a problem which will have to be addressed through TAC), have you considered allocating more SND cores?
Since Dynamic Balancing isn't supported in CloudGuard instances, you will need to change the number allocated via cpconfig.
This will need to be done on both cluster members in a maintenance window. 
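
Roughly like this (cpconfig itself is interactive; the checks afterwards are standard commands):

[Expert@FW_DC_03:0]# cpconfig                # choose the CoreXL option and lower the number of firewall instances to free cores for SND
[Expert@FW_DC_03:0]# reboot                  # the new CoreXL split only takes effect after a reboot
[Expert@FW_DC_03:0]# fw ctl affinity -l -a   # verify the resulting SND/worker core assignment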

guiausechi
Participant

Hello @PhoneBoy 


What TAC told me is that with these adjustments (SR-IOV + Intel network card) the driver should change inside the VM, and we would gain control over the affinities, choosing which cores go to the firewall workers and which to the SNDs, but so far it hasn't worked.

Yes, these adjustments were made together with a TAC engineer.

PhoneBoy
Admin

If all the traffic is hitting one SND core (and not both), that implies the driver being used for the NICs doesn't support Multi-Queue.
And yes, that would have to be addressed with TAC.
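
A quick way to check (assuming the mq_mng tool is present on your take; otherwise ethtool alone already tells the story):

[Expert@FW_DC_03:0]# ethtool -l eth0      # RX/TX channel counts exposed by the driver (a single combined channel = one queue)
[Expert@FW_DC_03:0]# mq_mng --show        # Multi-Queue status per interface on R80.40 and later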

