Like everyone else said, it kind of depends. Assuming all CPUs are at 100% (so both the SND/IRQ and Firewall Worker cores are fully saturated), the buffers between the different components start to fill up; once they overflow, packets either start to be lost or, in some cases, cause a "backup" into the prior buffering component, which can then overflow as well. Note that while many of the buffers listed below can have their sizes increased from the defaults, doing so is generally NOT desirable, as it merely addresses a symptom of the problem (queue overflows) rather than the actual cause (the queue not being emptied fast enough by the receiving component).
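As a concrete example of the kind of knob I mean (a rough sketch only — it assumes a Gaia gateway and an interface named eth1, so adjust names to your environment), here is how the RX ring size can be viewed and raised. Making the ring bigger just gives you a deeper bucket; it does nothing to make the SND empty it any faster:

    # Current vs. maximum ring sizes as reported by the NIC driver
    ethtool -g eth1

    # Persistent change from Gaia clish (exact syntax may vary by Gaia version)
    set interface eth1 rx-ringsize 1024
    save config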
As a thought experiment, here is where I'd say the buffer points are that could overflow for a packet traversing a very busy firewall. I'm sure I'm missing a bunch, but these are the ones I can think of off the top of my head (a rough sketch of commands for inspecting some of them follows the list):
- NIC hardware buffer for receiving frames
- Gaia RX ring buffer (rx-ringsize)
- CoreXL Firewall Worker input queue (enqueue) fed by the SND (fwmultik_input_queue_len) - this queue can be actively managed by Priority Queues when worker CPU utilization hits 100%, as noted earlier
- CoreXL Firewall Worker internal buffers between chain modules and such (I'm assuming...)
- CoreXL Firewall Worker dequeue back to the SND, which probably has an input buffer of its own (I'm assuming; I can't find a reference for this)
- Gaia TX ring buffer (tx-ringsize, sim_requeue_enabled)
- NIC hardware buffer for transmission
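Here is roughly where I'd look for counters covering those stages (my own sketch, not an exhaustive or official list; eth1 is just a placeholder interface name, and driver-level counter names vary by NIC driver):

    # NIC hardware buffer and RX/TX ring buffers: driver-level counters and ring sizes
    ethtool -S eth1 | grep -iE 'drop|fifo|miss'
    ethtool -g eth1

    # OS-level view of the same interfaces: RX-DRP / RX-OVR / TX-DRP columns
    netstat -ni

    # CoreXL Firewall Worker instances and core affinities
    fw ctl multik stat
    fw ctl affinity -l -r

    # SecureXL / SND side statistics
    fwaccel stats -s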
Only the CoreXL Firewall Worker input queue has active management available via Priority Queues; all the other queues are just FIFO to my knowledge. If the RX ring buffer is full and the NIC tries to put a new frame into it, certain NIC/driver combinations will simply hold the frame and try again, waiting for a ring buffer slot to open, instead of just dropping it with a ++RX-DRP. However this "hold" behavior can cause a backup into the NIC receive buffer, thus causing it to overflow (++RX-OVR), even though the actual cause is a full RX ring buffer. This specific "backup" condition is indicated by both RX-DRP and RX-OVR being incremented together in "lock-step", as mentioned in my book; a quick way to watch for it is sketched just below. When bottlenecks occur, most loss tends to happen on the RX side in the first three components of the list above, which then severely limits the speed at which packets can be pumped into the TX-side components (the last three items), so those don't tend to have problems in this area.
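If you want to check for that lock-step condition yourself, here's a quick-and-dirty sampler (nothing official, just my own sketch; it assumes bash on the gateway, an interface named eth1, and the usual mapping of RX-DRP to rx_dropped and RX-OVR to rx_fifo_errors, which is typical but ultimately driver-dependent):

    #!/bin/bash
    # Print per-second increments of RX-DRP and RX-OVR for one interface.
    # Both counters climbing together suggests a full RX ring buffer backing
    # up into the NIC's own receive buffer.
    IF=${1:-eth1}
    STATS=/sys/class/net/$IF/statistics
    prev_drp=$(cat $STATS/rx_dropped)
    prev_ovr=$(cat $STATS/rx_fifo_errors)
    while sleep 1; do
        drp=$(cat $STATS/rx_dropped)
        ovr=$(cat $STATS/rx_fifo_errors)
        echo "$(date +%T)  RX-DRP +$((drp - prev_drp))  RX-OVR +$((ovr - prev_ovr))"
        prev_drp=$drp
        prev_ovr=$ovr
    done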
However, buffering problems on the TX side are not completely unheard of; see sk75100: The 'ifconfig' / 'netstat' commands show that "TX drops" counter on the interface grows rap....
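The same sort of spot check works on the TX side (again just a sketch, with eth1 as a placeholder and counter names varying by driver):

    # OS-level and driver-level TX drop counters
    netstat -ni
    ethtool -S eth1 | grep -iE 'tx.*(drop|fifo|err)'
    cat /sys/class/net/eth1/statistics/tx_dropped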
Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com