Juergen_Blumens
Explorer

Maximum reachable bandwidth 3800

Hello,

We have an IPsec tunnel between two sites, each with a 3800. The available Internet bandwidth is 1 Gbit/sec, and the latency is about 12 msec. There is an IP test node behind the firewall on both sides. We can't achieve more than 300 Mbit/sec between the two test nodes over the tunnel. The encryption is AES-256, but even if we set the encryption to none, it stays at this maximum bandwidth. The firewalls run R80.40 JHF 190, all blades except FW and VPN are disabled during the tests, and the kernel parameters
cphwd_medium_path_qid_by_cpu_id = 1
cphwd_medium_path_qid_by_mspi = 0
are set. We do not see a CPU overload.
We ran the download tests with a simple HTTP download and also with iperf.
If we connect the test nodes directly to the Internet without the firewalls, we reach the maximum bandwidth of 1 Gbit/sec.
If we perform an additional parallel download via a second tunnel to another site with a 3800, the bandwidth doubles!
What is your experience? Is there a maximum bandwidth per connection that is limited by the firewall hardware? Have you ever seen this magical 300 Mbit/sec?
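A minimal sketch of the kind of comparison we ran (the addresses are placeholders, not our real test nodes; iperf3 syntax shown, classic iperf uses similar flags):

# Verify the kernel parameters on the gateway
fw ctl get int cphwd_medium_path_qid_by_cpu_id
fw ctl get int cphwd_medium_path_qid_by_mspi

# On the test node behind site B (placeholder address 10.2.0.10): start the iperf3 server
iperf3 -s

# From the test node behind site A: single TCP stream for 30 seconds
iperf3 -c 10.2.0.10 -t 30

# Same transfer with 8 parallel streams -- if the aggregate is much higher,
# the ~300 Mbit/sec ceiling looks like a per-flow limit rather than a link limit
iperf3 -c 10.2.0.10 -t 30 -P 8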

Thanks

PhoneBoy
Admin

That's expected behavior for a single heavy "elephant" flow.
This is constrained by the fact that only a single core handles the specific flow.

We have started to address this in R81.20 with Hyperflow, which we are doing a Techtalk on next week.
Since you're only using Firewall and VPN, Hyperflow wouldn't help, as it currently only assists Medium Path inspection.
However, there are other improvements to VPN that might improve speed somewhat.
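If you want to confirm the tunnel traffic really is fully accelerated (and therefore pinned to a single SND core), something along these lines on the gateway during a transfer should show it (exact output varies by version):

# SecureXL status
fwaccel stat

# Summary counters: accelerated vs. medium path (PXL) vs. slow path (F2F)
fwaccel stats -s

# Reset the counters before a clean measurement run
fwaccel stats -r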

the_rock
Legend

I have a TAC case with a customer at the moment and they get, if lucky, maybe 20% of their bandwidth through the VPN tunnel (the other side is a Fortigate). Everything was verified on the other end, Fortinet TAC did a bunch of checks, and TAC asked us on the CP side to change the MSS value.

I honestly have no idea where the TAC engineer got the idea that 10-20% is the most you would get through the VPN, because I have a hard time believing that. I was thinking there was a way to change MSS values per VPN community, but that does not seem to be possible (a rough sketch of the global kernel-parameter approach follows the list below).

Things CP TAC suggested so far that we did:

- install the latest jumbo for R81.10 (though in all fairness, that is a suggestion no matter the problem)

- cluster failover

- try disabling SecureXL

- turn VPN acceleration off for that specific tunnel
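As mentioned above, a rough sketch of how the MSS change is typically applied globally via kernel parameters (the parameter names are from memory -- double-check them against the relevant SK for your version before applying anything):

# Clamp MSS for VPN traffic in the firewall and SecureXL paths
fw ctl set int fw_clamp_vpn_mss 1
fw ctl set int sim_clamp_vpn_mss 1

# To survive a reboot, the same parameters go into
# $FWDIR/boot/modules/fwkern.conf (fw_*) and simkern.conf (sim_*)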

PhoneBoy
Admin

iked being multithreaded in R81.20 should help performance a bit. 
However, VPN traffic not involving other blades will still be constrained to what a single SND core can handle (per flow).

Chris_Atkinson
Employee

Generally, if outright speed is what you're after, you may need to adjust some things; refer to:

https://support.checkpoint.com/results/sk/sk73980 

With that said I see you've tried different encryption methods without success. Do you see individual CPU/cores peaking?
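One quick way to check while a transfer is running (commands available on Gaia; the exact views vary a little by version):

# Per-core view in top: press '1' to expand individual CPUs and watch for one
# core pegged near 100% (mostly softirq/si) while the others stay mostly idle
top

# cpview -> CPU also shows per-core utilization over time
cpview

# Show which interfaces and firewall instances are bound to which cores
fw ctl affinity -l -r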

CCSM R77/R80/ELITE
the_rock
Legend

That was the first thing the initial engineer asked us to do and it did not change anything; it was the exact same issue. Not saying it's not successful for others, but it was not for us.

Chris_Atkinson
Employee

Per @PhoneBoy above, were your tests also running a single flow or multiple threads?

CCSM R77/R80/ELITE
the_rock
Legend

Multiple.

Chris_Atkinson
Employee

Noted; in that case the issue is different from the one described by @Juergen_Blumens here.

CCSM R77/R80/ELITE
the_rock
Legend

True...every case is different 🙂

Juergen_Blumens
Explorer

This is the CPU load and the bandwidth during the test: [CPU.png] [BW.png]

Timothy_Hall
Champion

Please see this lengthy thread which hashes out the performance of the 3800 for VPNs:

Check Point CPAP-SG3800 and expected performance l... 

You are almost certainly saturating a single SND core with your fully-accelerated VPN traffic and it simply cannot go any faster.  Unfortunately the 3800 uses an ultra-low-voltage processor architecture, whose individual cores are at least 2-3 times slower than Xeon cores.  Intel tries to make up for this by having more cores available (8 in your case), which doesn't help your situation.  I did make some rather unorthodox "last ditch" recommendations in the prior thread that may help you; check them out.

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
Juergen_Blumens
Explorer

Thanks for your reply and the reference to the other thread. We also tried with a 6200 and did not get more bandwidth. Is the architecture comparable to the 3800?

Chris_Atkinson
Employee

Yes, comparable in that many parallel TCP or UDP connections are needed for best results.

Recent versions have introduced technologies such as Hyperflow to contend with similar non-VPN scenarios (for systems with 8+ CPU cores).
sk178070: HyperFlow in R81.20 and higher

CCSM R77/R80/ELITE
Timothy_Hall
Champion

I don't know the precise CPU type in the 6200, but the 3800 is rated for 2.75Gbps of VPN throughput while the 6200 is rated for 2.57Gbps, so I'd say they are comparable.  I'm assuming these numbers are for all available cores being used simultaneously for VPN, not just one.

As I mentioned earlier the graphs show that VPN traffic is fully saturating a single SND core, and there is no way to spread the traffic of a single tunnel across multiple SND cores that I know of.  Hyperflow does not help with VPN traffic at this time and neither does Lightspeed.  Multi-core VPN only applies on Firewall Worker/Instance cores.

One non-intuitive thing to try: set 3DES for IPSec/Phase 2, measure performance, then set IPSec/Phase 2 for AES-128 and measure again.  The AES-128 speed should be at least double that of 3DES, if not you are bumping up against some other kind of limitation other than firewall CPU.
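The measurement side of that comparison can be as simple as re-running the same transfer after each policy change (placeholder address; change the community cipher in SmartConsole and install policy between runs):

# Run 1: community set to 3DES for Phase 2
iperf3 -c 10.2.0.10 -t 30

# Run 2: community set to AES-128 for Phase 2
iperf3 -c 10.2.0.10 -t 30

# If the AES-128 result is not at least roughly double the 3DES result,
# the bottleneck is probably not the firewall CPU doing the encryption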

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
Bob_Zimmerman
Authority

3800 uses an Atom C3758 (8c8t, 3.125W/c), while the 6200 uses a Pentium G5400 (2c4t, 29W/c). I would expect a 6200 to perform significantly better with a single traffic flow, since it has nearly ten times the power budget to work with.

Timothy_Hall
Champion

I would agree, which makes me think that he is bumping against some other kind of performance limitation, not necessarily on the firewall itself.  The 3DES/AES-128 test I mentioned in a prior post should help reveal what is going on.

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
Chris_Atkinson
Employee

A packet capture to see TCP window sizes and MSS (and derived MTU) values would perhaps also be helpful, namely to confirm that fragmentation has been dealt with by enabling MSS clamping, etc.
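For example, on the internal interface (eth1 here is a placeholder for whatever interface faces the test node):

# Read the MSS negotiated on new connections (shown in the options of SYN packets)
tcpdump -nni eth1 -v 'tcp[tcpflags] & tcp-syn != 0'

# Look for fragmented IP packets arriving from the tunnel
tcpdump -nni eth1 'ip[6:2] & 0x3fff != 0'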

CCSM R77/R80/ELITE
Timothy_Hall
Champion

Even though the 3800 model has 8 cores, which is the minimum required to support Hyperflow, I find it interesting that sk178070 was just updated to state that the 3800 model does *not* support Hyperflow.

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
PhoneBoy
Admin

Yes, the 3800 does not support Hyperflow due to hardware limitations (not specific to the number of cores available).

