distribution SNDs in hyper-v environment
Hello team,
Has anyone gone through this? I have a Check Point environment running on Hyper-V with 24 vCPUs. According to sk106855:
"Hyper-V automatically affines the hv_netvsc interfaces to all CPU cores." But that is not what has happened: Check Point is
assigning only 2 vCPUs to the SNDs, and when bandwidth consumption increases from 700 Mbps to 2 Gbps we start to see slowness, packet loss, latency, etc. There is no spillover to other CPUs, I don't see new SND processes being started on other instances, and those 2 vCPUs sit at 100% usage.
We opened a ticket with TAC and they made the following recommendation: activate SR-IOV and change the network card to Intel or Mellanox. We followed the recommendation; however, the driver does not change (it remains hv_netvsc, as described in sk106855), and when we try to set the affinity manually, both via command and via file, the settings are not applied.
===
Manual assignment fails:
[Expert@FW_DC_03:0]# fw ctl affinity -s -i eth0 0
interface eth0 invalid
Check that the eth0 interface is active physical interface
===
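For anyone comparing notes, a quick way to confirm which driver an interface is actually using and how the CPUs are currently assigned might look like this (a sketch using standard Gaia expert-mode tools; eth0 is just an example interface):
===
ethtool -i eth0          # driver field still shows hv_netvsc while the synthetic Hyper-V NIC is in use
fw ctl affinity -l -r    # lists which CPUs handle which interfaces and which firewall instances
fw ctl multik stat       # shows the CoreXL instances, their CPUs and connection counts
===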
I have attached some pictures of the environment.
I know that sk106855 says this configuration is not supported and that SR-IOV is not certified, but we are following a recommendation from Check Point's R&D team. Has anyone gone through a similar case? Have you received this recommendation from the R&D team before?
Information about this environment:
Environment: open server on Hyper-V (this Hyper-V host runs only the Check Point VM)
vCPU: 24
Version: R80.40 - Take 196 - Cluster
Memory: 48 GB
Thank you!
I haven't done much with running gateways in Hyper-V, but I believe the lack of support for vRSS will ensure that all inbound traffic goes through only one of your SND cores. This is evident in your network-per-cpu.png screenshot. You also say "I don't see new SND processes being started on other instances and those 2 vCPUs are at 100% usage", implying you expect the Dynamic Workloads/Split feature to add more SND cores to help handle the load; that feature is only supported on certain Check Point appliances and is not supported in a virtualized environment at this time.
When you scale traffic up from 700 Mbps to 2 Gbps, that one SND core is probably getting saturated. If you don't have any blades enabled that call for deep inspection in the Medium Path (APCL/URLF, Threat Prevention), the vast majority of inspection processing will happen in the fast path on that one SND core, making the situation worse. Ironically, enabling more features (pushing most traffic handling into the Medium Path) or forcing traffic into the slow path via sk104468 will allow the Dynamic Dispatcher to spread processing across multiple workers/instances and potentially improve performance.
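If it helps to confirm this, a couple of quick checks along these lines (a sketch; exact command availability can vary by version and JHF take):
===
fw ctl multik dynamic_dispatching get_mode   # check whether the CoreXL Dynamic Dispatcher is enabled
fw ctl multik stat                           # per-instance connection counts, i.e. how evenly the workers are loaded
===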
now available at maxpowerfirewalls.com
Hello @Timothy_Hall,
Understood, thank you for the explanation!
Out of interest, which underlying version of Windows Server is being used, and was there a recommendation to upgrade from R80.40?
Hello @Chris_Atkinson
Windows Server 2019. We had already considered an upgrade, but in TAC's words there would be no performance gain from it, so I'm holding off for now.
Independent of the driver used for the NICs (a problem which will have to be addressed through TAC), have you considered allocating more SND cores?
Since Dynamic Balancing isn't supported in CloudGuard instances, you will need to change the number allocated via cpconfig.
This will need to be done on both cluster members in a maintenance window.
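For example, roughly (a sketch, assuming the standard cpconfig CoreXL menu; the change only takes effect after a reboot):
===
cpconfig                 # use the CoreXL menu option to lower the number of firewall instances,
                         # leaving more cores free for SNDs, then reboot
fw ctl affinity -l -r    # after the reboot, verify the new SND/worker split
===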
Hello @PhoneBoy
What TAC told me is that with these adjustments (SR-IOV + an Intel network card) the driver should change inside the VM and we would be able to control the affinities, i.e. which cores go to the firewall workers and which to the SNDs, but so far it hasn't worked.
Yes, these adjustments were made together with a TAC engineer.
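In case it is useful to others following this, the kind of manual assignment we are aiming for once the driver actually changes would look roughly like this (a sketch; the CPU numbers are just examples, and the command is only accepted once the interface is recognized as a physical interface):
===
fw ctl affinity -s -i eth0 0     # pin eth0 interrupts to CPU 0 (an SND core)
fw ctl affinity -s -i eth1 1     # pin eth1 to CPU 1
fw ctl affinity -l -r            # verify the resulting assignment
===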
If all the traffic is hitting one SND core (and not both), that implies the driver being used for the NICs doesn't support Multi-Queue.
And yes, that would have to be addressed with TAC.
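A quick way to check that once the VF driver is in place (a sketch; eth0 as an example interface):
===
ethtool -l eth0              # a "Combined" or RX channel count greater than 1 means the driver exposes multiple queues
grep eth0 /proc/interrupts   # with an Intel/Mellanox VF driver you would expect one IRQ line per queue
===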