There is a common mantra that packet loss should be avoided at all costs, yet TCP's congestion control relies on the timely dropping of packets when the network is overloaded; that is how all the TCP-based connections sharing a congested link settle on a stable transfer speed that is as fast as the network will allow. Increasing ring buffer sizes can increase jitter to the point that it causes a kind of "network choppiness": all the TCP streams traversing the firewall constantly hunt for a stable transfer speed, and they all end up backing off far more than they should. This tends to happen in the following situations (see the rough worked example after the list):
1) A step down from a higher-speed to a lower-speed interface (e.g. 10Gbps to 1Gbps) where the lower-speed link is fully utilized
2) Traffic from multiple interfaces all trying to pile into a single fully-utilized interface
3) Traffic on two equal-speed interfaces (e.g. both 1Gbps) running with high utilization, yet upstream of one of the interfaces there is substantially less bandwidth, such as a 100Mbps Internet connection
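To make the effect concrete, here is a rough back-of-the-envelope sketch (Python, purely illustrative and not from the book): it estimates how long a completely full transmit ring takes to drain at line rate on a saturated 1Gbps egress link. The 1500-byte frame size and the 256 vs. 4096 descriptor counts are assumed example values, not recommended settings.

#!/usr/bin/env python3
"""Back-of-envelope queuing delay added by a full transmit ring.

A rough sketch under assumptions: worst-case full-size (1500-byte) frames,
the entire ring occupied while the egress link is saturated, and example
ring depths of 256 (a common default) vs. 4096 (a typical maximum).
"""

FRAME_BYTES = 1500          # assumed average frame size on the wire
LINK_BPS = 1_000_000_000    # 1Gbps bottleneck egress link

def drain_delay_ms(ring_descriptors: int) -> float:
    """Milliseconds needed to drain a completely full ring at line rate."""
    queued_bits = ring_descriptors * FRAME_BYTES * 8
    return queued_bits / LINK_BPS * 1000

for ring in (256, 4096):    # example default vs. maxed-out ring depth
    print(f"ring={ring:>4} descriptors -> up to {drain_delay_ms(ring):5.1f} ms of buffering")

The added buffering, and therefore the potential jitter, grows linearly with ring depth, which is why a maxed-out ring can make delay on a saturated link swing by more than an order of magnitude.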
Note that two equal-speed interfaces will not run into these issues as long as the bandwidth actually available upstream of each interface truly matches its link speed. In my book I tell the sordid tale of "Screamer" and "Slowpoke", two TCP-based streams competing for a fully-utilized firewall interface whose ring buffers have been increased to the maximum size, causing jitter to increase by 16X. Increasing ring buffer sizes is a last resort; generally more SND/IRQ cores should be allocated first, and then perhaps Multi-Queue enabled. I'd suggest reading the Wikipedia article about Bufferbloat.
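If you suspect the rings have already been bumped, it is worth checking the current versus maximum hardware settings before chasing jitter. Below is a minimal sketch that wraps "ethtool -g" (assuming a Linux/Gaia expert-mode shell with ethtool available and root access; the interface name is only an example):

#!/usr/bin/env python3
"""Read current vs. maximum NIC ring buffer sizes via 'ethtool -g'.

A minimal sketch: assumes ethtool is present, the caller has root, and the
interface name passed in (or the eth0 default) is just an example.
"""
import subprocess
import sys

def ring_params(iface: str) -> dict:
    """Return {'max': {'RX': n, 'TX': n}, 'current': {'RX': n, 'TX': n}}."""
    out = subprocess.run(["ethtool", "-g", iface],
                         capture_output=True, text=True, check=True).stdout
    params, section = {"max": {}, "current": {}}, None
    for line in out.splitlines():
        if line.startswith("Pre-set maximums"):
            section = "max"
        elif line.startswith("Current hardware settings"):
            section = "current"
        elif section and ":" in line:
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            if key in ("RX", "TX") and value.isdigit():
                params[section][key] = int(value)
    return params

if __name__ == "__main__":
    iface = sys.argv[1] if len(sys.argv) > 1 else "eth0"   # example interface
    print(ring_params(iface))

Point it at the interface feeding the congested link; if the current RX/TX values are already at the pre-set maximums, that is a strong hint the ring sizes were raised at some point and the jitter described above may be self-inflicted.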
--
Second Edition of my "Max Power" Firewall Book
Now Available at http://www.maxpowerfirewalls.com
Gaia 4.18 (R82) Immersion Tips, Tricks, & Best Practices
Self-Guided Video Series Coming Soon