@Timothy_Hall
First of all, thank you all for sharing your insights. I had already tested most of the points mentioned earlier, and the results confirm what I suspected.
It seems that the WRP interface, as well as the virtual switches, are limited to running on a single CPU core. As a result, throughput rates of around 4–5 Gbps per virtual switch instance appear to be normal under these conditions.
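One way to confirm this single-core behavior is to check the CPU affinity of the virtual switch and watch per-core utilization under load. A rough sketch of the commands I would run on the VSX gateway in Expert mode (the VSID `5` below is just a placeholder; your IDs will differ):

```shell
# List all virtual systems/switches and their VSIDs
vsx stat -l

# Show CPU affinity of interfaces and firewall instances across all VSIDs
fw ctl affinity -l -a -v

# Switch context to the virtual switch under test (VSID 5 is an example)
vsenv 5

# Watch live per-core utilization; one core pinned near 100% while the
# others stay idle would match the single-core bottleneck described above
top
# (press `1` inside top to expand the per-CPU view; cpview is an alternative)
```

If `fw ctl affinity -l -a -v` shows the virtual switch's traffic handled by a single instance/core while the physical interfaces spread across many, that would explain the roughly 4-5 Gbps ceiling per virtual switch.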
The challenge I am facing is that I am running two virtual systems, each of which makes use of 32 CPU cores. When using physical interfaces, I can achieve throughput rates of approximately 80 Gbps without major issues.
However, in order to connect these two virtual systems with a dedicated transfer network, I introduced a virtual switch. For this connection, I require a throughput in the range of 50–60 Gbps. Unfortunately, with virtual switches, such throughput is far out of reach given the current architecture and CPU core limitations.
This discrepancy is a significant bottleneck wherever virtualized infrastructures are expected to handle very high transfer rates. Addressing it, either through multi-core support in virtual switching or through alternative ways of interconnecting virtual systems, will be crucial for high-performance virtualized networking going forward.
Maybe I am overlooking something in this design, which would explain why I am not achieving higher throughput rates.
@_Val_, @PhoneBoy
Perhaps the R&D team at Check Point could provide some insights on this.
The question is:
What maximum throughput can virtual switches versus WRP interfaces handle in VSX?
Otherwise, I will open a ticket on the topic.
➜ CCSM Elite, CCME, CCTE ➜ www.checkpoint.tips