Timothy_Hall

Core 1, which you referenced in your last screenshot along with sk181860, is generally a Firewall Worker Instance on a 16-core SMT gateway, not an SND, unless Dynamic Split had added another SND at the moment you took that particular screenshot.

So, looking through everything, here is what I see:

1) The act of changing the ring buffer size briefly reinitializes the interface, and I suspect that reset, not the buffer size increase itself, is what caused the speed to increase.  From expert mode, try ifdown ethX;ifup ethX when you are seeing the slow performance: does that suddenly speed things up for a while?  If it does, that could suggest some kind of issue with the SND code, as Multi-Queue seems to be doing a very good job of spreading traffic across the SND cores as much as possible; however, the interface reset will cause MQ to reinitialize as well, so MQ can't be ruled out either.  igb (which is implementing MQ here) is a rather old driver (5.3.5.20) and is tentatively slated to be updated to 5.12.3 in R82.
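
A quick way to run that test from expert mode, sketched below with eth2 as a placeholder for your slow interface:

    # Confirm the driver and version in use (igb in this case)
    ethtool -i eth2
    # Check current vs. maximum ring buffer sizes without changing them
    ethtool -g eth2
    # Bounce the interface; if throughput recovers for a while,
    # the reset itself (not the larger buffers) is the likely trigger
    ifdown eth2 ; ifup eth2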

2) Hyperflow/pipelining is working well on your system, but SMB is not eligible for it until (hopefully) R82.  So an SMB elephant flow will be stuck on a single core and will top out on performance there, and there is not much you can do about it.
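
If you want to confirm that single-core ceiling, watch per-core load while an SMB transfer runs; a minimal sketch from expert mode:

    # Live per-core utilization; press 1 inside top to break out individual cores
    top
    # Per-CoreXL-instance connection counts and peaks
    fw ctl multik stat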

3) Make sure the Anti-Virus blade is NOT scanning SMB traffic in the relevant TP profile; this was mentioned earlier in the thread.  You can verify with fw stat -b AMW, checking its "files:" line.
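
A quick check from expert mode (the grep is just a convenience to isolate the relevant line):

    # The "files:" line shows which file-transfer protocols
    # the Anti-Malware blade is currently scanning
    fw stat -b AMW | grep -i files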

4) There has also been a longstanding problem with SMB/CIFS traffic ending up in the Medium Path when it should be in the fastpath.  This was supposed to be fixed in R81.20 Jumbo HFA Take 43+, which you have.  You may need to adjust your policy to ensure that APCL/Threat Prevention is not trying to deep-inspect SMB traffic; failing that, try fast_accel'ing the SMB traffic into the fastpath, as sketched below.  See sk156672: SecureXL Fast Accelerator (fw fast_accel) for R80.20 and above.  The command fw ctl multik print_heavy_conn can help you find the attributes of SMB elephant flows needed to set up a fast_accel rule.
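
A sketch of that procedure with placeholder subnets (445 is the SMB destination port, 6 is TCP); check sk156672 for the exact syntax on your version:

    # Identify elephant flows and the 5-tuple attributes to accelerate
    fw ctl multik print_heavy_conn
    # Enable the fast_accel mechanism and add a rule for the SMB traffic
    fw ctl fast_accel enable
    fw ctl fast_accel add 10.1.1.0/24 10.2.2.0/24 445 6
    # Verify the rule took effect
    fw ctl fast_accel show_table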

5) You have a very high percentage of fully-accelerated traffic (Accelerated pkts), yet really only one physical core acting as an SND, with SMT enabled on core 0 and its sibling core 8.  Dynamic Split can add more SNDs if the Firewall Worker Instances aren't very busy, but given the features you have enabled, I'd say they are probably busy quite a bit of the time.  In my experience, once SND utilization goes north of 75% with SMT enabled, performance starts to degrade quite markedly as the two SND instances "bang into" each other trying to reach the physical core; Firewall Worker instances do benefit about 30% from SMT, but SNDs under heavy load most definitely do not.  This may be one of those cases where turning off SMT and going back to 8 full cores with the initial 2/6 split (subject to change by Dynamic Split, of course) would be a good idea.  This was also mentioned earlier in the thread and is your final option if the four items above don't solve the issue.
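
Before making that change, it is worth confirming the current SMT state and core split; a sketch from expert mode (command availability can vary by version):

    # SMT (Hyper-Threading) status on the gateway
    cat /proc/smt_status
    # Current core assignments: SNDs/interfaces vs. Firewall Worker instances
    fw ctl affinity -l -r
    # Dynamic Split (Dynamic Balancing) status
    dynamic_balancing -p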

6) There are some network errors, including rather rare TX errors, but there are so few compared with the total number of frames that they are not worth worrying about at this point.
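
If you want to keep an eye on them anyway, the standard counters are easy to watch (eth2 is a placeholder again):

    # Per-interface RX/TX error and drop counters
    netstat -ni
    # Detailed NIC-level statistics, filtered to errors and drops
    ethtool -S eth2 | grep -i -E "err|drop"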

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com