I’ve noticed that in a VSX environment the virtual switches don’t seem to achieve very high packet throughput.
In practice, I can’t get more than around 4–5 Gbps over a wrp interface. When I connect the same setup using physical switches + 100 Gbps transceivers and enable Multi-Queue, I can reach about 80 Gbps on a 100 Gbps interface.
The setup I tested consists of two scenarios with two 29000 appliances for VSX LS:
In the first one, Virtual System 1 is connected to Virtual System 2 through a virtual switch within the VSX environment.
In the second scenario, Virtual System 1 and Virtual System 2 are connected through a physical switch.
This allows me to directly compare the performance of traffic flows when using virtual switching versus physical switching.
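To illustrate the kind of measurement I mean, here is a minimal sketch with iperf3 (the tool, the address, and the stream count are placeholders, not necessarily what I actually used; any bulk-transfer test between hosts behind VS1 and VS2 works the same way):

```
# On a test host behind Virtual System 2: start the receiver
iperf3 -s

# On a test host behind Virtual System 1: push parallel TCP streams
# through VS1 -> (virtual switch or physical switch) -> VS2
# (10.2.0.10 and 8 streams are placeholder values)
iperf3 -c 10.2.0.10 -P 8 -t 60
```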
So my questions are:
What are the limitations of virtual switches in VSX regarding throughput?
Are virtual switches and wrp (warp link) interfaces capable of using multiple cores for packet forwarding?
This post describes a similar experience with inter-VS traffic utilizing virtual switches and wrp interfaces.
The virtual switch path, processed by the VSX kernel, doesn't provide the same performance as the physical switch path.
Did you verify CPU usage for FWK during your tests with this one-liner?
The fwk_dev process, responsible for virtual switch traffic, can spike to 100% CPU usage on a single core, indicating single-core saturation. It might also not offload to SecureXL’s Fast Path or Medium Path and is subject to user-space inspection, increasing latency and CPU usage. Traffic routed internally between VSs often falls into the F2F path, which is the slowest. @Timothy_Hall might be able to elaborate on this further.
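Not the exact one-liner referenced above, but a minimal sketch (standard expert-mode commands; the VS ID is a placeholder) of how to spot a saturated fwk thread and the F2F share:

```
# Per-thread CPU usage of the fwk worker processes (one batch iteration, sorted by %CPU)
top -b -H -n 1 | grep fwk | sort -k9 -nr | head

# Switch into the context of the Virtual System or Virtual Switch
# (ID 1 is a placeholder; list the IDs with "fw vsx stat -l")
vsenv 1

# Share of traffic handled by SecureXL vs. forwarded F2F (slow path) in this context
fwaccel stats -s
```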
Tools like mq_mng, cpview, and fw ctl affinity offer limited visibility and control over virtual switch performance.
In a VSX setup, virtual systems share the same physical resources and compete for CPU and memory bandwidth, which adds latency and reduces overall throughput.
Multi-Queue is supported in VSX environments, but its effectiveness depends on:
- the NIC driver and its queue capabilities,
- the number of available SND cores,
- whether Multi-Queue is configured in VS0 when interfaces are shared across VSs.
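A quick way to check those three points (syntax as I recall it from the R81.x Performance Tuning guide; verify on your version):

```
# Multi-Queue status: queues and cores assigned per interface (run in VS0 context)
mq_mng --show

# CoreXL / SND affinity, including interface-to-core assignment
fw ctl affinity -l -a

# Queue counts the NIC driver actually exposes (eth1 is a placeholder)
ethtool -l eth1
```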
Summary:
Feature | Physical Interface | Virtual Switch (wrp) |
---|---|---|
SecureXL Fast Path | ✅ Fully supported | ❌ Not applied |
Multi-Queue | ✅ Supported | ❌ Not applicable |
CoreXL Scaling | ✅ Multi-core | ⚠️ Limited |
Throughput | ✅ High | ⚠️ Limited |
I understand the facts and they are clear to me.
My question is about the throughput you can realistically plan for when using WRP interfaces. In classic 1G environments it was never an issue to work with virtual switches, but the situation changes when dealing with 10G/25G/40G/100G.
Up to which throughput does Check Point recommend using virtual switches with WRP interfaces, and at what point should one switch to dedicated physical switching hardware?
Are there any recommendations from Check Point here?
I've never really dug into the networking guts of VSX, but the fact that you are topping out at 4–5 Gbps may be significant: back when 10 Gbps interfaces first hit the scene, total throughput would max out at about 5 Gbps if Multi-Queue was not available or not enabled for the interface, because the single SND core handling it hit 100%. It sounds to me like a single thread in fwk_dev is hitting that same single-core limit and simply can't go any faster.
Supposedly, the warp jump between VSs will accelerate inter-VS traffic as much as possible, but there may well be limitations that cause F2F handling, which obviously will not help the CPU. There have also been cases in the past where the Virtual Switch improperly floods all inter-VS traffic to all VSs instead of sending it only to the right one; these issues were supposedly fixed a while back but may have reared their ugly head again now that UPPAK is on the scene, which I believe is enabled by default on your Quantum Force 29000: sk175113: Traffic latency when it passes through a Virtual Switch (VSW)
Other kernel variables that might be significant are cphwd_routing_interval and enable_calc_route_wrp_jump and sim_warp_jump_strict_mac.
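If you want to check the current values on your box, something along these lines should work for firewall-module parameters; whether the sim_* one can be read the same way I'm not certain, so treat that part as an assumption:

```
# Read current values of the kernel parameters mentioned above.
# "fw ctl get int" covers fw-module parameters; if a name is not found,
# it may be a SecureXL/sim-module parameter handled via simkern.conf instead.
fw ctl get int cphwd_routing_interval
fw ctl get int enable_calc_route_wrp_jump
fw ctl get int sim_warp_jump_strict_mac
```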
If you are in UPPAK mode on your 29000, it might be interesting to revert to KPPAK mode and see what happens with your VSwitch performance.
First of all, thank you all for sharing your insights. I had already tested most of the points mentioned earlier, and they confirm what I had suspected.
It seems that the WRP interface, as well as the virtual switches, are limited to running on a single CPU core. As a result, throughput rates of around 4–5 Gbps per virtual switch instance appear to be normal under these conditions.
The challenge I am facing is that I am running two virtual systems, each of which makes use of 32 CPU cores. When using physical interfaces, I can achieve throughput rates of approximately 80 Gbps without major issues.
However, in order to connect these two virtual systems with a dedicated transfer network, I introduced a virtual switch. For this connection, I require a throughput in the range of 50–60 Gbps. Unfortunately, with virtual switches, such throughput is far out of reach given the current architecture and CPU core limitations.
This discrepancy highlights a significant bottleneck in scenarios where virtualized infrastructures are expected to handle very high data transfer rates. Addressing this limitation—either through multi-core support in virtual switching or alternative approaches to interconnectivity—will be crucial for enabling high-performance virtualized networking environments in the future.
Maybe I am overlooking something in this design, which is why I am not achieving higher throughput rates.
@_Val_, @PhoneBoy
Perhaps the R&D team at Check Point could provide some insights on this.
The question is:
What maximum throughput can virtual switches and WRP interfaces handle in VSX?
Otherwise, I will open a ticket on the topic.
What version/JHF level?
@PhoneBoy
We use R81.20 JHF 105.
I've seen similar throughput observations with VSwitches. When switching into the VSW context at the CLI and running fw ctl multik stat, I noted only one kernel instance. So if there were a way to assign multiple kernel instances, it would likely help with throughput (not 100% sure on this, but it would make sense).
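For anyone who wants to reproduce that check (the VS ID is a placeholder):

```
# List all Virtual Systems / Virtual Switches and their IDs
fw vsx stat -l

# Switch into the Virtual Switch context (ID 3 is a placeholder)
vsenv 3

# Show the CoreXL kernel instances serving this context
fw ctl multik stat
```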
Also, I'm not sure whether this also applies to R82.x.
Given this is most likely in Legacy VSX code, I doubt R82 will address the issue.
I suspect this will require VSnext.
As a large university hospital, we also have a VSwitch environment. It's very convenient that we don't have to deal with a multitude of physical interfaces to route traffic between VSX instances, but can use VSwitches instead.
However, we have users (mainly scientists) who occasionally complain that the data rates of their large transfers are poor. This seemed strange in our 10 Gbit firewall environment. But your post prompted me to take some action, because we have the same problem: we never achieve more than ~4.7 Gbit/s via the VSwitch.
Therefore, we are also curious whether this limitation can be changed in some way or whether we need to reconsider the use of VSwitches and switch to physical interfaces instead 😕
(R81.20, Take 105, Check Point 7000)
@HeikoAnkenbrand did you have a chance to run some tests with Maestro Fast Forward-enabled connections? I know that's not a solution, but it would be nice to know whether this allows more throughput.
Thanks @Wolfgang for your tip!
I also make frequent use of Fast Forward ("R81.20 - Performance Tuning Tip - Maestro Fastforward") in Maestro environments, and while it's helpful, the main limitation I see is the throughput of the VSX virtual switches. In all environments I've tested, this is around 4–5 Gbps, which becomes a real challenge for very high-performance virtual systems in the 100 Gbps range.
@PhoneBoy:
The question is:
What is Check Point’s recommendation here?
Maybe R&D can provide some guidance on this.
Do we know when R&D will respond?
A statement from R&D would be very informative.
Short of testing R82 / VSNext I envisage such a statement will only come via a formal solution center request (via SE / local office).
I will ask internally and share what I learn but this is not a replacement for the correct process.
Don't be angry with me, but this is something that belongs in the manual. When I'm planning an environment for a customer, I can't ask local SE questions and wait a long time for answers.
Could you please provide information on the throughput rates we can expect with virtual switches?
Preferably a statement for classic VSX and VSNext.
I currently assume approx. 4–5 Gbps for classic VSX. Anything beyond that I would build at the customer's site using dedicated physical interfaces with, for example, 100 Gbps transceivers.