You have posed an interesting question, and the answer on the gateway side depends on a number of factors. Most logs are generated in the INSPECT/Worker instances and then shuttled to the fwd/fw_full process on the firewall, which handles the transport of logs to the SMS/Log Server via SIC on TCP/257.
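As a quick sanity check from expert mode on the gateway, you can confirm that fwd is up and that it has an established log connection to the management/log server on TCP/257; the grep patterns below are just illustrative:

    # Confirm fwd/fw_full is running and monitored by the Check Point WatchDog
    cpwd_admin list | grep -i fwd

    # Look for the SIC log connection to the SMS/Log Server on TCP/257
    netstat -anp | grep ":257 "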
Let's start with a firewall running in the traditional "kernel mode", where the INSPECT engines (Firewall Workers) are located in the kernel of the Gaia OS. In this case the Dispatcher and INSPECT instances all run in the kernel and have the ultimate power of preemption over the CPU, so if they are very busy or under attack, the fwd/fw_full process can be starved for CPU and unable to transport logs in a timely fashion. This is normally indicated by a "FW-1: Log buffer is full" / "FW-1: lost N log/trap messages" entry in /var/log/messages, meaning the fwd/fw_full process could not empty the fw_log_bufsize buffer fast enough before it overflowed with logs pouring in from the kernel. This buffer size on the gateway can be increased, which may help with short bursts of logs, but during a prolonged period of extremely heavy logging even the enlarged buffer can overflow. You can read more about increasing the buffer here:
sk52100 - /var/log/messages shows 'log buffer is full'
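Roughly, the check and tune-up described in sk52100 look like this; treat the 2 MB value below as an example only and consult the SK for the value and procedure appropriate to your version:

    # Count how many buffer overflow events have been recorded so far
    grep -c "log buffer is full" /var/log/messages

    # Show the current log buffer size in bytes
    fw ctl get int fw_log_bufsize

    # Make a larger buffer persistent across reboots (example value; see sk52100)
    echo 'fw_log_bufsize=2097152' >> $FWDIR/boot/modules/fwkern.conf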
The newer firewalls (like Quantum) run in User Space Firewall (USFW) mode, where the INSPECT instances are implemented in process space as fwk processes (SecureXL/sim still lives in kernel space). In this case the fwd/fw_full process is on more of an equal footing with the INSPECT instances as far as CPU access is concerned, since they are all processes, so log buffer overflows are somewhat less likely to happen.
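To see whether a particular gateway is actually running USFW and what the fwk worker processes look like, something along these lines should work (my recollection is that cpprod_util returns 1 for user mode, so verify on your own system):

    # 1 = USFW (workers in process space), 0 = kernel-mode workers
    cpprod_util FwIsUsermode

    # List the user-space firewall worker processes
    ps aux | grep fwk | grep -v grep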
However, on USFW systems with more than 20 total cores, fwd is automatically given its own dedicated CPU core to ensure that it has enough CPU to accomplish its mission. I first noticed this on a 28-core Quantum firewall that had a default CoreXL split of 4/23, which made no sense at first. Upon investigation it turned out that one of the cores that would otherwise be assigned as an INSPECT instance was affined to fwd/fw_full by default.
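You can see this core assignment for yourself; on an affected box the reverse affinity listing should show fwd owning a core that carries no INSPECT instance:

    # Show which daemons, interfaces and INSPECT instances are affined to each core
    fw ctl affinity -l -r

    # Show the CoreXL worker instances and the cores they run on
    fw ctl multik stat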
So after that long-winded explanation, on the gateway it really comes down to the fwd/fw_full process and its ability to access CPU resources in a timely fashion. The process is rather old and I believe it is single-threaded; the log transport mechanism for Check Point gateways has really not changed much over the years.
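If fwd really is the bottleneck, that should show up as the process pegged at or near 100% of a single core while logs back up; a quick way to spot-check that (assuming a single fwd process, which is not the case on VSX) is:

    # Snapshot of fwd CPU usage; sustained values near 100% suggest fwd is saturating its one core
    top -b -n 1 | grep -w fwd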
The SMS side also uses the fwd process to receive the gateway logs and write them to disk; the issue here tends not to be CPU but the speed of the disk I/O path. If the disk is overwhelmed (especially in VMware or cloud environments) this can lead to sizable delays in getting the logs written to disk, and even SOLR indexing issues that keep new logs from appearing in SmartConsole in a timely fashion.
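On the management/log server, a quick look at disk latency and utilization while logs are arriving can confirm whether the I/O path is the choke point; iostat should be present on Gaia, and the interval below is just an example:

    # Extended disk statistics every 5 seconds, 3 samples; watch await and %util
    # on the device backing $FWDIR/log
    iostat -x 5 3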
Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com