mcguppy
Participant

Limited throughput

Hi community

 

I am having throughput problems with several Check Point Quantum Spark 1800 appliances running R80.20.40.

For the internal networks I am using the 10 Gbit/s DMZ interface in trunk mode, with all VLANs on that trunk. The DMZ interface is connected to a 10 Gbit/s port on a data center switch.

The WAN port is connected to the same switch, but at 1 Gbit/s, since the firewall's WAN interface only supports 1 Gbit/s.

The HCI server platform is connected to the same switch via 25 Gbit/s ports (one per host).

I am doing performance tests with iperf.

With two VMs in the same subnet, I get a throughput of 21 Gbit/s in a 10-second measurement, in both directions.

After moving one VM into another subnet, so that the traffic has to pass through the firewall, I only get about 350 Mbit/s in a 10-second measurement, even though the link to the firewall is 10 Gbit/s and there is almost no other traffic on the DMZ interface.

The picture is the same when testing from a client connected directly to the same subnet as the firewall's WAN interface, going through NAT to the VM on the HCI platform acting as the iperf server: I only get about 350 Mbit/s or even less.

I always use the same VM as the iperf server, which can clearly handle 21 Gbit/s, as seen in the test with server and client in the same subnet.
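For reference, these are plain single-stream runs, roughly along these lines (assuming iperf3; the server address is a placeholder):

iperf3 -s                          (on the VM acting as server)
iperf3 -c <server-ip> -t 10        (on the client: one stream, 10 seconds)
iperf3 -c <server-ip> -t 10 -R     (the same test in the reverse direction)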

I checked the datasheet of the Check Point Quantum Spark 1800, and the stated throughput values with security features enabled are much higher than the 350 Mbit/s I am seeing.

What am I doing wrong, or what is limiting the throughput? The security settings on the firewall are mostly at their default values, except that I had to change the access policy control from “Standard” to “Strict”.

How can I find the bottleneck here?

 

Thank you for your help.

Kind regards, Stefan

5 Replies
Chris_Atkinson
Employee

On the surface, it seems the other components have been dimensioned proportionally better than the firewall.

Have you attempted the test with multiple parallel threads/connections?
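For example, something like the following (iperf3 syntax; the address is a placeholder) runs eight parallel streams for 10 seconds:

iperf3 -c <server-ip> -P 8 -t 10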

CCSM R77/R80/ELITE
G_W_Albrecht
Legend

Which blades are active?

CCSP - CCSE / CCTE / CTPS / CCME / CCSM Elite / SMB Specialist
Swiftyyyy
Advisor

iperf is often not the best tool for this kind of testing, as (by default) it uses a single transport stream.

What I find gives a much more interesting figure in most cases is a file transfer via SMB.
As SMB is multithreaded, it will use multiple connection streams, and you will very likely get a much nicer (and more realistic) number.
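For example (on Windows; paths are placeholders), copying a directory of files with robocopy in multithreaded mode opens several transfers in parallel:

robocopy \\fileserver\share C:\temp /MT:8    (copies up to 8 files concurrently)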

PhoneBoy
Admin

I believe iperf by default will use a single stream.
When we do performance tests, we do it with multiple streams, simulating multiple users.
A single heavy stream is commonly referred to as an elephant flow.
Due to how CoreXL and SecureXL work, such flows will be limited in throughput compared to the datasheet numbers.

On our regular gateways, we have a technology called HyperFlow (added in R81.20) that improves throughput in these cases.
SMB gateways do not yet have this feature.
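To confirm what is going on: on our full Gaia gateways, heavy (elephant) connections can be listed with fw ctl multik print_heavy_conn (see sk164215); whether the Spark firmware exposes an equivalent, I would have to check.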

mcguppy
Participant

Thank you all for your help.

Testing with iperf using parallel streams indeed gives much better results (nearly 900 Mbit/s over the WAN interface with NAT).

Copying files with WinSCP was also much faster.

Thanks again for your help.
