Hi Guys,
I'm less interested in hard drive throughput. That shouldn't matter much on a gateway; it's really only a topic for an SMS or MDS with heavy log traffic.
The interesting question is: is the network throughput on ESXi with the vmxnet3 network driver and Multi-Queue the same as on an open server with the ixgbe driver and Multi-Queue? Multi-Queue with vmxnet3 drivers is supported since R80.30 with kernel 3.10, and with R80.40. You can read more about this in this article: R80.40 - Multi Queue on VMWare vmxnet3 drivers
I have now run the same test as described above in the LAB on an HP DL380 G9.
| Installation | Network throughput |
|---|---|
| HP DL380 G9 + VMware vSphere Hypervisor (ESXi) 7.0.0 + virtual gateway R80.40 (kernel 3.10 + Multi-Queue on two interfaces (vmxnet3) + 2 SND + 6 CoreXL) | 8.34* GBit/s |
| HP DL380 G9 + Open Server (without ESXi) R80.40 (kernel 3.10 + Multi-Queue on two interfaces (ixgbe) + 2 SND + 6 CoreXL) | 8.41* GBit/s |
* The values are not representative; this was just a quick test in the LAB.
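For anyone who wants to reproduce a quick test like this: the post doesn't say which tool was used, but a common way to push traffic through a gateway is iperf3 with several parallel TCP streams, so the load spreads across the Multi-Queue rings and CoreXL instances. The IP address and stream/duration values below are placeholders, not the author's setup.

```shell
# Hypothetical reproduction sketch (not the author's exact method).

# On a host behind the gateway (server side):
iperf3 -s

# On a host on the other side of the gateway (client side),
# 8 parallel TCP streams for 60 seconds:
iperf3 -c <server-ip> -P 8 -t 60
```

A single stream often won't saturate a 10 GBit/s path through a firewall; multiple parallel streams give the SND cores a chance to distribute connections across queues.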
The results are practically the same for me! The question, of course, is what this looks like at 40 GBit/s+ throughput😀.
MQ settings:
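As a reference for checking the settings yourself: on an R80.40 gateway the Multi-Queue state can be inspected in expert mode with the `mq_mng` tool (which replaces the older `cpmq` from R80.30 and earlier). This is a sketch of read-only commands; interface names and output depend on your system.

```shell
# Show current Multi-Queue configuration per interface
mq_mng --show

# Verbose output: queues and core/IRQ affinity details
mq_mng --show -v
```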
Advantages of the ESXi solution:
- VMware snapshots are possible
- Multiple firewalls on one piece of hardware (without using VSX)
- Resource allocation, e.g. cores and memory, under VMware
- Independence from the Check Point HCL for Open Servers
- Backup of the gateway with VMware backup tools
Disadvantages of this solution:
- Per-core licensing becomes a problem with multiple firewalls
- Higher CPU utilization due to the hypervisor
- No LACP (bond interfaces) possible
At this time I would not recommend the ESXi solution to any customer. But it would be interesting to hear what the Check Point guys have to say about this solution.
➜ CCSM Elite, CCME, CCTE ➜ www.checkpoint.tips