Traffic generated by Load Tester is dropped on Check Point Gateway
We are doing some performance tests on the Check Point gateway. During the test we noticed the majority of the traffic being dropped by the Check Point gateway: when we trigger the load with iMIX traffic, the traffic is dropped, and the zdebug output shows it being dropped due to a CPU instance spike.
The traffic is basically IP Protocol 253 (IPP253).
The gateway model is SG16200.
On top of this we tried to pump the load with the default frame size of 1500 bytes and 100 streams, and no traffic drop was observed.
Has anyone encountered this? We need to do the load test with varying frame sizes as well.
Is there any setting on the gateway that would allow varying frame sizes through without being dropped?
Accepted Solutions
IP Protocol 253? That's your problem right there.
SecureXL only supports TCP and UDP traffic.
Everything else goes F2F...and it sounds like you're hitting the limit of what a CPU can handle somewhere.
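(A quick way to see this on the gateway itself, assuming expert mode, is to check the SecureXL counters:
fwaccel stat
fwaccel stats -s
The second command shows the split between accelerated and F2F packets; with a pure IPP253 load you would expect the F2F percentage to be close to 100%.)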
The CPU spike detection does not itself consume extra performance; it is a tool that logs spikes for troubleshooting and should not drop traffic on its own.
That said, if you stress a system enough it will most likely drop traffic. If you want to get more performance out of the system, start with this SK:
https://support.checkpoint.com/results/sk/sk167553
For now we don't know what has been tested or how much data we are talking about. The version is also not known. Did you test only one interface or more? Which blades are enabled, etc.?
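(To help gather that information on the gateway, these standard commands can be used; exact output varies by version:
fw ver
enabled_blades
cpview
fw ver shows the installed version and hotfix level, enabled_blades lists the active Software Blades, and cpview shows live CPU load and per-interface throughput.)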
If you like this post please give a thumbs up(kudo)! 🙂
We are pumping 10Gig of iMIX traffic into the firewalls.
We only have the Firewall blade enabled.
The currently running version is R81.20.
I tried the test on 2 interfaces and both give the same results.
As stated, more information regarding the version/hotfix level, configuration, and enabled blades is required in order to optimize performance...
We are running R81.20 JHF Take 53. Only the Firewall blade is enabled.
Do we know the destination port in question? If so, you can easily do a zdebug and grep for that port number.
For example, port 4434:
fw ctl zdebug + drop | grep "4434"
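Since the traffic here is IP Protocol 253 rather than TCP/UDP with a fixed port, a variation on the same idea, assuming the drop messages include the protocol number (they typically appear as proto=<n>), would be:
fw ctl zdebug + drop | grep "proto=253"
or simply grep for one of the load tester's source IPs.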
Andy
From the zdebug drop we are seeing IPP-253 traffic, and the reason mentioned is a CPU spike on instance 41.
We are pumping IP Protocol 253 traffic, which will use a range of TCP/UDP ports from 1-65535.
Also, when we pump the traffic with a frame size of 1500 bytes there is no traffic drop observed; however, when I vary the frame size, e.g. pumping traffic with a jumbo frame size of 1518 bytes, the gateway starts dropping the traffic.
You may want to investigate more with TAC on this.
Andy
Have you configured the higher interface MTU / support for jumbo frames?
How many source & destination addresses are involved in your test and are you using bonds with Layer3+4 hash?
What does cpview show you about the CPU load distribution...
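(For reference, the corresponding checks could look something like this, with the interface and bond names only as examples:
clish -c "show interface eth1-05"
cat /proc/net/bonding/bond1
cat /sys/class/net/bond1/bonding/xmit_hash_policy
The first shows the configured MTU, the second the bond mode and slave state, and the third should report layer3+4 if that hash policy is set; cpview then shows how the load is spread across the cores.)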
I haven't tried increasing the MTU at the interface level. From cpview I can see 3 CPUs reaching 100%. The IP stream is initiated from 100 IPs; actually, with 1 IP address most of the traffic was being dropped even with the default frame size, so when we increased the stream to 100 IP addresses the traffic started going through the firewalls.
Yes, I am using bond interfaces and have also modified the load-sharing mechanism to the Layer3+4 hash, so load sharing on the bond interfaces is confirmed.
Even if I increase the MTU, how will the firewall behave for smaller frame sizes, like a 68-byte frame? Because the behavior is the same whenever I vary the frame size from the default.
It's sort of a catch-22 situation... a higher MTU means larger packets but fewer of them, and a smaller MTU means more packets of a smaller size, so it's really hard to say for sure how the firewall would behave unless you make the change.
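(As a rough back-of-the-envelope illustration at a 10 Gbps line rate, counting preamble and inter-frame gap: 1518-byte frames work out to roughly 810 Kpps, 9000-byte jumbo frames to roughly 140 Kpps, and 64-byte frames to about 14.88 Mpps. Since the per-packet processing cost is what loads the cores, especially for F2F traffic, the difference between small and jumbo frames is around two orders of magnitude.)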
Sure, let me make the MTU change on the firewall interfaces; I will keep you posted once I have performed the test. Thanks for your valuable assistance.
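(For reference, on Gaia the change would typically be made in clish, for example:
set interface bond1 mtu 9000
save config
where bond1 is just an example name; jumbo frames also need to be supported end to end on the switches in the path.)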
Sure! Though keep in mind this should be done outside normal business hours, as generally (I say generally, though there are exceptions to every rule) increasing the MTU size can cause traffic latency.
Keep us posted on how it goes.
Andy
Noted on the traffic latency part. We are doing this exercise in a greenfield environment.
Increasing the MTU at the interface level allowed the jumbo frames through. The smaller frames were basically being dropped by the firewall because the iMIX traffic is IPP253; after modifying the traffic stream to TCP & UDP, even small frames of 78 bytes were passed successfully.
That's the thing with MTU size: you never really know the behavior until you test it.
Andy
Just to help confirm a baseline, can you please share the output of the following in expert mode:
fwaccel stats -s
dynamic_balancing -p
mq_mng --show -v
(I would recommend engaging with your local CP Account Team regarding your performance testing requirements if not already.)
We have managed to find the root cause, and it is described in this thread. We had SecureXL enabled the whole time while we were performing this load test.
Pleased to hear it. The first of the above commands would have highlighted the high F2F (slow path) profile that @PhoneBoy attributed to IPP253.
Per sk32578:
Non-TCP / Non-UDP / Multicast traffic: such traffic is not accelerated and goes through the Firewall path (starting in R76, Multicast traffic is accelerated, except IPv6 Multicast).
As also stated before, the system is just loaded. I assume the same: that 1 CPU is handling all the traffic. This can be checked with top or cpview. I'm wondering how many Mbit/s you see in cpview for the relevant interface.
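(In top, pressing 1 expands the per-core view so you can see whether a single core is pinned at 100%; cpview's network/interface screens show per-interface throughput, though the exact menu layout varies a bit by version.)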
If you like this post please give a thumbs up(kudo)! 🙂
Multiple CPU instances were active during the load distribution, though the spike was noticed on 3 of the CPU instances. We tried to pump 50%, 70% and 100% load on the 10Gig interfaces, all resulting in the same behavior.
Thanks for the advice @PhoneBoy.
Yes, IP Protocol 253 was the culprit. After we modified the profile on the load tester and changed the iMIX traffic to specific TCP & UDP streams, no further significant drops were observed. In addition, we also increased the MTU at the interface level to 9000 so that no jumbo frames are dropped.
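(As a sanity check after a change like this, one would expect fwaccel stats -s to show the vast majority of packets in the accelerated path with only a small F2F percentage, assuming no other blades push traffic back to the slow path.)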