OK, now we are getting somewhere:
0) No carrier transitions, at least on the eth1-08 interface. Looks like bond1 consists of eth1-05 and eth1-06, while bond2 consists of eth1-07 and eth1-08. The RX-DRP percentage is well under the 0.1% target, though.
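For reference, here's how I usually sanity-check the bond membership and the RX-DRP percentage from expert mode (the interface names below just mirror yours; the RX-DRP percentage is simply RX-DRP divided by RX-OK):

    # Confirm which slave interfaces belong to each bond and their link state
    cat /proc/net/bonding/bond1
    cat /proc/net/bonding/bond2

    # RX-OK vs. RX-DRP per interface; drop % = RX-DRP / RX-OK * 100
    netstat -ni | grep eth1-0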
1) Looks like you only provided the ethtool stats for eth1-08, but it is showing 99% misses/drops. There were 83 overruns, probably caused by the ring buffer filling up and creating backpressure into the NIC's own buffer, which then overran a few times. I would expect the other interfaces to look similar; see #3 below...
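To see whether the other interfaces really do look the same, and to check the ring buffer sizing, something like the following works; note that the exact counter names vary by NIC driver, so treat the grep pattern as a starting point rather than gospel:

    # NIC-level counters; look for rx_missed_errors, rx_fifo_errors, rx_no_buffer_count or similar
    for i in eth1-05 eth1-06 eth1-07 eth1-08; do
        echo "=== $i ==="
        ethtool -S $i | grep -Ei 'miss|drop|fifo|no_buffer'
    done

    # Current vs. maximum RX ring buffer size
    ethtool -g eth1-08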
2) Inbound RX balancing on the bonds looks good, but the TX numbers are far enough apart that you should probably set L3/L4 hash balancing if you haven't already, although you don't seem to be having any problems on the TX side.
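You can see the transmit hash policy currently in use straight from the bonding driver; if you do decide to switch to L3/L4 hashing, I believe the clish syntax is along these lines (verify against the GAiA admin guide for your version before committing):

    # Transmit hash policy currently in effect for each bond
    grep "Transmit Hash Policy" /proc/net/bonding/bond1
    grep "Transmit Hash Policy" /proc/net/bonding/bond2

    # In clish (syntax from memory, double-check it):
    # set bonding group 1 xmit-hash-policy layer3+4
    # set bonding group 2 xmit-hash-policy layer3+4
    # save config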
3) Given this is a 13800 with 20 cores, you are almost certainly running the default CoreXL split of 2/18 (4/36 with SMT enabled). So only two physical SND/IRQ cores are emptying the ring buffers of four very busy 1Gbps interfaces; they are not keeping up, which is causing the drops/misses. If a large percentage of traffic is accelerated (use fwaccel stats -s to check), those two cores will be getting absolutely killed and will seriously crimp the throughput of the box.
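Before changing anything, confirm what the box is actually running with; both of these are standard on the gateway:

    # Which cores are handling interface IRQs (SND) vs. running firewall kernel instances
    fw ctl affinity -l -r

    # Acceleration summary; the "Accelerated pkts/Total pkts" line is the one that matters here
    fwaccel stats -s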
Would strongly recommend adjusting the CoreXL split via cpconfig. If "Accelerated pkts/Total pkts" as reported by fwaccel stats -s is >50%, reduce the number of kernel instances from 18 to 14 to allocate 6 SND/IRQ cores; you may also want to disable SMT/Hyperthreading in this case.
If "Accelerated pkts/Total pkts" is <50% as reported by fwaccel stats -s reduce number of kernel instances from 18/36 to 16/32 to allocate more SND/IRQ cores and leave SMT/Hyperthreading on.
--
Second Edition of my "Max Power" Firewall Book
Now Available at http://www.maxpowerfirewalls.com
Gateway Performance Optimization R81.20 Course
Now Available at maxpowerfirewalls.com