<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: High CPU Load while packets processed through slow path in General Topics</title>
    <link>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81212#M16409</link>
    <description>&lt;P&gt;Item 1:&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;we moved to R80 in February and since then from time to time we receive alert Mail from our SMS (R80.30) that it lost connection to the active gateway of our Check Point Cluster (R80.10).&amp;nbsp; The result of my investigation was a high CPU Load (100%) on all cores due to high load on the fw_worker processes across this period.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;F2F/slowpath is 6.3M inbound and 33.9M outbound in one measurement, and 12.4M inbound and 24.7M outbound in another.&amp;nbsp; Traffic directly to and from the gateway (including logging) always goes F2F; these imbalances and the behavior described above scream an excessive logging rate, with fwd starving out when the 3 worker cores get busy. I'm not sure if that is just a symptom or the cause.&amp;nbsp; I would suggest trying to reduce the logging rate: focus on disabling logging for DNS, NetBIOS and other UDP-based protocols, and perhaps HTTP connections as well.&lt;/P&gt;
&lt;P&gt;Item 2: There is only one SND core so Multi-Queue will not help.&lt;/P&gt;
&lt;P&gt;Item 3: Please provide the output of &lt;STRONG&gt;fwaccel stat&lt;/STRONG&gt;. Packet and connection acceleration looks good, but if NAT templates are disabled they will cause a lot of F2F and worker overhead when you have a high new-connection rate; see the next item.&amp;nbsp; I would caution against enabling NAT templates unless the latest Jumbo HFA has been installed on your R80.10 gateway, as early implementations of NAT Templates caused SecureXL problems when enabled.&lt;/P&gt;
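As an aside on reading acceleration numbers: the percentages that fwaccel stats -s prints are plain counter ratios, so they can be re-derived from the raw packet counters posted elsewhere in this thread. A minimal sketch (the counter values are the ones from the thread output; the awk invocation itself is just illustrative):

```shell
# Re-derive the "fwaccel stats -s" percentages from the raw counters
# (values taken from the output posted in this thread).
awk 'BEGIN {
    total = 57415297311   # Total pkts
    accel = 51251182933   # Accelerated pkts
    f2f   = 6074120093    # F2Fed pkts
    printf "accel: %d%%\n", accel * 100 / total
    printf "f2f:   %d%%\n", f2f * 100 / total
}'
```

That reproduces the 89% accelerated / 10% F2F figures, which is why the overall acceleration looks healthy even though the absolute F2F packet rate is high.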
&lt;P&gt;Item 4: During a high-CPU period, please provide the new connection rate. On the &lt;STRONG&gt;cpview&lt;/STRONG&gt; Overview page, scroll down to the Network section and make sure to show the whole Network section.&lt;/P&gt;
&lt;P&gt;Item 5: Your single SND core seems to be doing OK, so there is probably not much RX-DRP, but your bandwidth numbers under load are suspiciously close to exactly 1Gbit as Heiko noticed. If you are pushing a 1Gbps interface that close to its theoretical limit, you are probably racking up overruns (RX-OVR) like crazy.&amp;nbsp; Please provide the full output of &lt;STRONG&gt;netstat -ni&lt;/STRONG&gt; for analysis.&lt;/P&gt;
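A quick way to narrow down Item 5 is to filter the netstat -ni output for interfaces with non-zero RX-DRP/RX-OVR counters. A sketch, assuming the classic net-tools column layout (Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR ...); the sample output and its numbers are made up for illustration:

```shell
# Print only interfaces whose RX-DRP or RX-OVR counter is non-zero.
# Assumes the classic net-tools "netstat -ni" column order; the sample
# below stands in for real gateway output and uses invented values.
sample='Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0 1500 0 9912345 0 0 31337 9812345 0 0 0 BMRU
eth1 1500 0 1234567 0 0 0 1212121 0 0 0 BMRU'

echo "$sample" | awk 'NR > 1 && ($6 > 0 || $7 > 0) {
    printf "%s: RX-DRP=%s RX-OVR=%s\n", $1, $6, $7
}'
```

On the real gateway you would pipe netstat -ni into the awk filter instead of the sample text.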
&lt;P&gt;Item 6: You're not using SHA-384 for any of your VPNs, are you?&amp;nbsp; It looks like most of your VPN traffic is fully accelerated, but using SHA-384 will cause VPN traffic to go F2F.&lt;/P&gt;
&lt;P&gt;Item 7: Even under load it looks like your firewall workers are pretty evenly balanced by the Dynamic Dispatcher, so I doubt it is an elephant flow issue but still worth investigating as Heiko mentioned.&lt;/P&gt;
&lt;P&gt;Item 8: High load can be caused by an overloaded sync network in a cluster; please provide the output of &lt;STRONG&gt;fw ctl pstat&lt;/STRONG&gt;.&amp;nbsp; You might need to do selective synchronization for the protocols mentioned in Item 1, as the combination of suddenly high connection rates and having to state-sync them too is a nasty double whammy on the workers.&lt;/P&gt;
&lt;P&gt;Item 9: Doubtful you are having memory issues, but please provide output of &lt;STRONG&gt;free -m&lt;/STRONG&gt; anyway.&lt;/P&gt;
&lt;P&gt;Item 10: You could be getting flooded with fragmented packets during a high CPU event (frags always go F2F in your version), please provide output of &lt;STRONG&gt;fwaccel stats -p&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
    <pubDate>Tue, 07 Apr 2020 22:06:10 GMT</pubDate>
    <dc:creator>Timothy_Hall</dc:creator>
    <dc:date>2020-04-07T22:06:10Z</dc:date>
    <item>
      <title>High CPU Load while packets processed through slow path</title>
      <link>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81122#M16390</link>
      <description>&lt;P&gt;Hello community,&lt;/P&gt;&lt;P&gt;we moved to R80 in February and since then from time to time we receive an alert mail from our SMS (R80.30) that it lost connection to the active gateway of our Check Point Cluster (R80.10).&lt;BR /&gt;The result of my investigation was a high CPU load (100%) on all cores due to high load on the fw_worker processes across this period.&lt;/P&gt;&lt;P&gt;This issue had an impact on all parts of the network which are routed through the firewall. We had increased latency in the network and our SMS couldn't get data from the fw node. That is why we have a break in the graph of SmartView Monitor and had to investigate on the affected node.&lt;/P&gt;&lt;P&gt;In the affected period I recognized an increased amount of inbound packets/sec on our external interface with the CPVIEW history. Furthermore I also saw a rise of packets/sec handled by the slow path (FW). The number of inbound packets on the external interface and the number of packets handled by the slow path are quite close.&lt;/P&gt;&lt;P&gt;I created a CPInfo file with an export of the CPVIEW history for visualization and compared the graphs of fw_inbound packets and system_performance, and they correlate.&lt;BR /&gt;&lt;BR /&gt;That leads me to the conclusion that the packets have been inspected by the default inspection, but I'm not able to find information about inspected packets in the logging, and although the dynamic dispatcher is active, no Top-Connections are listed in CPVIEW under CPVIEW.CPU.Top-Connections.&lt;/P&gt;&lt;P&gt;I'm not sure how to find the connections that were responsible for that behaviour.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Here are some values from CPVIEW from the second before and during the high CPU load.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;CPVIEW.Overview 25Mar2020 &lt;FONT color="#339966"&gt;14:17:46 &lt;/FONT&gt;&lt;/STRONG&gt;&lt;BR /&gt;|---------------------------------------|&lt;BR /&gt;| Num of CPUs: 6 |&lt;BR /&gt;| CPU Used |&lt;BR /&gt;| 2 45% |&lt;BR /&gt;| 1 44% 
|&lt;BR /&gt;| 3 40% |&lt;BR /&gt;|---------------------------------------|&lt;BR /&gt;| CPU: |&lt;BR /&gt;| CPU User System Idle I/O wait Interrupts |&lt;BR /&gt;| 0 0% 25% 75% 0% 41,234 |&lt;BR /&gt;| 1 12% 32% 56% 0% 41,234 |&lt;BR /&gt;| 2 17% 28% 55% 0% 41,234 |&lt;BR /&gt;| 3 13% 27% 60% 0% 41,234 |&lt;BR /&gt;| 4 0% 1% 99% 0% 41,234 |&lt;BR /&gt;| 5 0% 0% 100% 0% 41,234&lt;BR /&gt;|---------------------------------------|&lt;BR /&gt;| Traffic Rate: |&lt;BR /&gt;| Total&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; FW&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp; PXL&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; SecureXL |&lt;BR /&gt;| Inbound packets/sec 155K &amp;nbsp; &amp;nbsp; 9,255&amp;nbsp;&amp;nbsp; &amp;nbsp; 1,432&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 145K |&lt;BR /&gt;| Outbound packets/sec 156K&amp;nbsp; 9,805&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 1,432&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 145K |&lt;BR /&gt;| Inbound bits/sec 958M&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 6,380K&amp;nbsp;&amp;nbsp; 10,354K&amp;nbsp;&amp;nbsp; 941M |&lt;BR /&gt;| Outbound bits/sec 1,002M&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 33,917K 10,537K 958M |&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;CPVIEW.Overview 25Mar2020 &lt;FONT color="#FF0000"&gt;14:17:47&lt;/FONT&gt;&lt;/STRONG&gt;&lt;BR /&gt;|---------------------------------------|&lt;BR /&gt;| Num of CPUs: 6 |&lt;BR /&gt;| CPU 
Used |&lt;BR /&gt;| 1 89% |&lt;BR /&gt;| 2 89% |&lt;BR /&gt;| 3 89% |&lt;BR /&gt;|---------------------------------------|&lt;BR /&gt;| CPU: |&lt;BR /&gt;| CPU User System Idle I/O wait Interrupts |&lt;BR /&gt;| 0 0% 29% 71% 0% 39,075 |&lt;BR /&gt;| 1 22% 67% 10% 0% 39,075 |&lt;BR /&gt;| 2 4% 85% 11% 0% 39,075 |&lt;BR /&gt;| 3 3% 86% 11% 0% 39,075 |&lt;BR /&gt;| 4 0% 0% 100% 0% 39,075 |&lt;BR /&gt;| 5 0% 0% 100% 0% 39,075 |&lt;BR /&gt;|---------------------------------------|&lt;BR /&gt;| Traffic Rate: |&lt;BR /&gt;| Total&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; FW&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp; &amp;nbsp;&amp;nbsp; PXL&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; SecureXL |&lt;BR /&gt;| Inbound packets/sec 182K&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 30,459&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 1,032&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 150K |&lt;BR /&gt;| Outbound packets/sec 157K&amp;nbsp;&amp;nbsp; 5,877&amp;nbsp; &amp;nbsp; &amp;nbsp;&amp;nbsp; 1,032&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 150K |&lt;BR /&gt;| Inbound bits/sec 1,010M&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 12,437K&amp;nbsp;&amp;nbsp; 7,394K&amp;nbsp; 991M |&lt;BR /&gt;| Outbound bits/sec 1,040M &amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 24,715K&amp;nbsp; 7,526K 1,008M |&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I also attached the graph of CPU-Load (one core), fw_inbound, RX on external interface.&lt;BR /&gt;I did a manual failover to see if the cpu load is just an issue of one 
node. You can see it in the graph.&lt;BR /&gt;&lt;BR /&gt;The load suddenly went down at around 13:15 on the 26th of March.&lt;/P&gt;&lt;P&gt;I hope you have ideas for further investigation or for preventing this. I thought about creating my own inspection profile and setting most actions to inactive.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks in advance and best regards. Stay healthy&lt;/P&gt;&lt;P&gt;Martin Reppich&lt;BR /&gt;System Administrator&lt;BR /&gt;Helmholtz-Zentrum Potsdam&lt;BR /&gt;Deutsches GeoForschungsZentrum GFZ&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 07 Apr 2020 12:18:45 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81122#M16390</guid>
      <dc:creator>Martin_Reppich</dc:creator>
      <dc:date>2020-04-07T12:18:45Z</dc:date>
    </item>
    <item>
      <title>Re: High CPU Load while packets processed through slow path</title>
      <link>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81151#M16391</link>
      <description>&lt;P&gt;&lt;a href="https://community.checkpoint.com/t5/user/viewprofilepage/user-id/13373"&gt;@Martin_Reppich&lt;/a&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;which hardware are you running, is it an open server?&lt;/P&gt;
&lt;P&gt;Wolfgang&lt;/P&gt;</description>
      <pubDate>Tue, 07 Apr 2020 14:59:10 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81151#M16391</guid>
      <dc:creator>Wolfgang</dc:creator>
      <dc:date>2020-04-07T14:59:10Z</dc:date>
    </item>
    <item>
      <title>Re: High CPU Load while packets processed through slow path</title>
      <link>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81159#M16393</link>
      <description>I think CoreXL balances load per connection, so high CPU usage on all cores would rule out one elephant connection causing the problem. Something like a parallel download of Windows updates maybe, but I can't explain why this is suddenly a problem with R80.10. I'd upgrade to R80.30 if possible and definitely fine-tune your threat prevention profile. An application control rule to limit non-essential apps like Twitch, YouTube and Spotify to 50 Mbit also works wonders sometimes.&lt;BR /&gt;I assume you're using an open server with 6 cores, licensed for 4 cores, right? Can you post the output of "enabled_blades" and "fwaccel stats -s" please?</description>
      <pubDate>Tue, 07 Apr 2020 16:14:35 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81159#M16393</guid>
      <dc:creator>Benedikt_Weissl</dc:creator>
      <dc:date>2020-04-07T16:14:35Z</dc:date>
    </item>
    <item>
      <title>Re: High CPU Load while packets processed through slow path</title>
      <link>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81164#M16396</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.checkpoint.com/t5/user/viewprofilepage/user-id/13373"&gt;@Martin_Reppich&lt;/a&gt;,&lt;/P&gt;
&lt;P&gt;You can see that the main part of the packets goes through the SecureXL fast path (marked red).&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;| Traffic Rate: |&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;| Total&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; FW&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp; PXL&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; SecureXL |&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;| Inbound packets/sec 155K &amp;nbsp; &amp;nbsp; &lt;FONT color="#339966"&gt;9,255&amp;nbsp;&amp;nbsp; &amp;nbsp; 1,432&amp;nbsp;&lt;/FONT&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;FONT color="#FF0000"&gt;145K&lt;/FONT&gt; |&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;| Outbound packets/sec 156K&amp;nbsp; &lt;FONT color="#339966"&gt;9,805&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 1,432&amp;nbsp;&amp;nbsp;&lt;/FONT&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;FONT color="#FF0000"&gt;145K&lt;/FONT&gt; |&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;| Inbound bits/sec 958M&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;FONT color="#339966"&gt;6,380K&amp;nbsp;&amp;nbsp; 10,354K&lt;/FONT&gt;&amp;nbsp;&amp;nbsp; &lt;FONT color="#FF0000"&gt;941M&lt;/FONT&gt; |&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;| Outbound bits/sec 1,002M&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;FONT color="#339966"&gt;33,917K 10,537K&lt;/FONT&gt; &lt;FONT color="#FF0000"&gt;958M&lt;/FONT&gt; |&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Screenshot_20200407-183550_Edge.jpg" style="width: 999px;"&gt;&lt;img src="https://community.checkpoint.com/t5/image/serverpage/image-id/5440i31FF42F55281C71D/image-size/large?v=v2&amp;amp;px=999" role="button" title="Screenshot_20200407-183550_Edge.jpg" alt="Screenshot_20200407-183550_Edge.jpg" /&gt;&lt;/span&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;Read more about the firewall paths (F2F, PXL, accelerated path) in my article:&lt;BR /&gt;&lt;A href="https://community.checkpoint.com/docs/DOC-3041-r80x-security-gateway-architecture-logical-packet-flow" target="_blank" rel="noopener"&gt;R80.x - Security Gateway Architecture (Logical Packet Flow)&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Could you please provide the following information? Then I can be more specific.&lt;/P&gt;
&lt;P&gt;# fwaccel stats&lt;BR /&gt;# fw ctl affinity -l&lt;BR /&gt;# top&amp;nbsp; | grep fw_worker&lt;BR /&gt;&lt;BR /&gt;Do you use 10GBit/s or 1 GBit/s interfaces?&lt;/P&gt;
&lt;P&gt;With 1 GBit/s interfaces, your interface can be saturated (red).&lt;BR /&gt;&lt;SPAN&gt; Inbound bits/sec 958M&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;FONT color="#339966"&gt;6,380K&amp;nbsp;&amp;nbsp; 10,354K&lt;/FONT&gt;&amp;nbsp;&amp;nbsp; &lt;FONT color="#FF0000"&gt;941M&lt;/FONT&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;| Outbound bits/sec 1,002M&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;FONT color="#339966"&gt;33,917K 10,537K&lt;/FONT&gt; &lt;FONT color="#FF0000"&gt;958M&lt;BR /&gt;&lt;/FONT&gt;&lt;/SPAN&gt;&lt;BR /&gt;Provide info on NIC errors:&lt;BR /&gt;# netstat -in&lt;/P&gt;
&lt;P&gt;With 10 GBit/s interfaces, enable Multi-Queue. Read more here:&amp;nbsp;&lt;A href="https://community.checkpoint.com/docs/DOC-3352-r80x-performance-tuning-tip-multi-queue" target="_blank" rel="noopener"&gt;R80.x - Performance Tuning Tip - Multi Queue&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;To reduce the load on the CoreXL instances, you can do the following:&lt;BR /&gt;- use the IPS default profile (only events with no performance impact)&lt;BR /&gt;- use IPS only to and from the internet&lt;BR /&gt;- optimize AV, Anti-Bot, HTTPS interception, ...&lt;BR /&gt;&amp;nbsp;&lt;BR /&gt;It could also be elephant flows!&lt;/P&gt;
&lt;P&gt;The big question is: how do you find elephant flows on an R80 gateway?&lt;BR /&gt;Evaluation of heavy connections (elephant flows)&lt;BR /&gt;&lt;BR /&gt;A first indication is a high CPU load on one core while all other cores have a normal CPU load. This can be displayed very nicely with "top". Ok, now a core has 100% CPU usage. What can we do now? For this there is&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&amp;amp;solutionid=sk105762&amp;amp;partition=General&amp;amp;product=Security" target="_self" rel="nofollow noopener noreferrer"&gt;SK105762&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;to activate "Firewall Priority Queues".&amp;nbsp; This feature allows the administrator to monitor the heavy connections that consume the most CPU resources without interrupting the normal operation of the firewall. After enabling this feature, the relevant information is available in the CPView utility. The system saves heavy connection data for the last 24 hours, and CPDiag has a matching collector which uploads this data for diagnosis purposes.&lt;/P&gt;
&lt;P&gt;Heavy connection flow system definition on Check Point gateways:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Specific instance CPU is over 60%&lt;/LI&gt;
&lt;LI&gt;Suspected connection lasts more than 10s&lt;/LI&gt;
&lt;LI&gt;Suspected connection utilizes more than 50% of the total work the instance does. In other words, connection CPU utilization must be &amp;gt; 30% &amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
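The 30% in the last bullet follows from combining the first and third criteria: a connection counts as heavy when it is responsible for at least half the work of an instance that is itself above 60% CPU. As a one-line check:

```shell
# 50% of an instance running above 60% CPU => at least 30% of a core.
awk 'BEGIN { printf "connection CPU floor: %d%%\n", 60 * 0.50 }'
```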
&lt;P&gt;Enable the monitoring of heavy connections.&lt;/P&gt;
&lt;P&gt;To enable the monitoring of heavy connections that consume high CPU resources:&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;#&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;STRONG&gt;fw ctl multik prioq 1&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;#&lt;/STRONG&gt;&lt;STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;reboot&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Finding heavy connections on the gateway with print_heavy_conn&lt;/P&gt;
&lt;P&gt;On the system itself, heavy connection data is accessible using the command:&lt;BR /&gt;&lt;BR /&gt;#&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;fw ctl multik print_heavy_conn&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Finding heavy connections on the gateway with cpview&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;#&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/EM&gt;&lt;STRONG&gt;cpview&lt;/STRONG&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; CPU &amp;gt; Top-Connection &amp;gt; InstancesX&lt;/P&gt;
&lt;P&gt;More read here:&lt;BR /&gt;&lt;A href="https://community.checkpoint.com/t5/General-Topics/R80-x-Performance-Tuning-Tip-Elephant-Flows-Heavy-Connections/m-p/69105/highlight/true#M14059" target="_self"&gt;R80.x - Performance Tuning Tip - Elephant Flows (Heavy Connections)&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 07 Apr 2020 17:19:40 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81164#M16396</guid>
      <dc:creator>HeikoAnkenbrand</dc:creator>
      <dc:date>2020-04-07T17:19:40Z</dc:date>
    </item>
    <item>
      <title>Re: High CPU Load while packets processed through slow path</title>
      <link>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81174#M16397</link>
      <description>&lt;P&gt;Hello Wolfgang,&lt;/P&gt;&lt;P&gt;it's open server hardware.&lt;/P&gt;&lt;P&gt;We use two IBM x3550 M4.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Best regards&lt;BR /&gt;Martin&lt;/P&gt;</description>
      <pubDate>Tue, 07 Apr 2020 18:11:53 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81174#M16397</guid>
      <dc:creator>Martin_Reppich</dc:creator>
      <dc:date>2020-04-07T18:11:53Z</dc:date>
    </item>
    <item>
      <title>Re: High CPU Load while packets processed through slow path</title>
      <link>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81179#M16399</link>
      <description>&lt;P&gt;Hi Benedikt,&lt;/P&gt;&lt;P&gt;yes we use open server hardware (IBM x3550 M4) and have licensed 4 cores.&lt;BR /&gt;Currently we don't use any threat prevention. Here are the outputs you asked for.&lt;/P&gt;&lt;P&gt;&lt;EM&gt;[Expert@gw2:0]# enabled_blades&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;fw vpn mon vpn&lt;/EM&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;EM&gt;[Expert@gw2:0]# fwaccel stats -s&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;Accelerated conns/Total conns : 24367/32954 (73%)&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;Accelerated pkts/Total pkts : 51251182933/57415297311 (89%)&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;F2Fed pkts/Total pkts : 6074120093/57415297311 (10%)&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;PXL pkts/Total pkts : 89994285/57415297311 (0%)&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;QXL pkts/Total pkts : 0/57415297311 (0%)&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Kind regards&lt;BR /&gt;Martin&lt;/P&gt;</description>
      <pubDate>Tue, 07 Apr 2020 18:30:47 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81179#M16399</guid>
      <dc:creator>Martin_Reppich</dc:creator>
      <dc:date>2020-04-07T18:30:47Z</dc:date>
    </item>
    <item>
      <title>Re: High CPU Load while packets processed through slow path</title>
      <link>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81190#M16403</link>
      <description>&lt;P&gt;Hi &lt;a href="https://community.checkpoint.com/t5/user/viewprofilepage/user-id/21670"&gt;@HeikoAnkenbrand&lt;/a&gt;&lt;/P&gt;&lt;P&gt;first of all, thanks for the detailed reply.&lt;/P&gt;&lt;P&gt;I agree with you that most of the packets go through the SecureXL fast path.&lt;BR /&gt;You referred to the output of the CPVIEW Traffic Rate when the CPU utilization was fine.&lt;BR /&gt;In my second output most of the packets also go through the SecureXL fast path, but additionally the packet rate for the slow path is increased, and after reading several posts regarding high CPU load I thought this is probably the reason in some way.&lt;/P&gt;&lt;P&gt;We use two 10GBit/s interfaces with VLANs, which most of the traffic goes through, and four dedicated 1GBit/s interfaces without VLANs.&lt;BR /&gt;We don't use any threat prevention; only the default inspection which is installed with the access policy is active.&lt;/P&gt;&lt;P&gt;Here are the outputs you asked for.&lt;/P&gt;&lt;P&gt;&lt;EM&gt;[Expert@gw2:0]# enabled_blades&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;fw vpn mon vpn&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;EM&gt;[Expert@gw2:0]# fwaccel stats&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;Name Value Name Value&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;-------------------- --------------- -------------------- ---------------&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Accelerated Path&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;------------------------------------------------------------------------------&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;accel packets 51409121939 accel bytes 35156919263700&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;conns created 301133996 conns deleted 180397325&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;C total conns 32193 C templates 0&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;C TCP conns 23063 C delayed TCP conns 0&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;C non TCP conns 9130 C delayed nonTCP con 0&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;conns from templates 3272 temporary conns 15524747&lt;/EM&gt;&lt;BR 
/&gt;&lt;EM&gt;nat conns 5879735 dropped packets 3603731&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;dropped bytes 302928490 nat templates 0&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;port alloc templates 0 conns from nat tmpl 0&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;port alloc conns 0 conns auto expired 105182996&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Accelerated VPN Path&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;------------------------------------------------------------------------------&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;C crypt conns 32 enc bytes 1143833360&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;dec bytes 27895854656 ESP enc pkts 13239884&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;ESP enc err 42 ESP dec pkts 22706430&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;ESP dec err 0 ESP other err 40&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;AH enc pkts 0 AH enc err 0&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;AH dec pkts 0 AH dec err 0&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;AH other err 0 espudp enc pkts 0&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;espudp enc err 0 espudp dec pkts 0&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;espudp dec err 0 espudp other err 0&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Medium Path&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;------------------------------------------------------------------------------&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;PXL packets 90048662 PXL async packets 90073798&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;PXL bytes 67374416312 C PXL conns 40&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;C PXL templates 0 PXL FF conns 0&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;PXL FF packets 0 PXL FF bytes 0&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;PXL FF acks 0&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Accelerated QoS Path&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;------------------------------------------------------------------------------&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;QXL packets 0 QXL async packets 0&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;QXL bytes 0 C QXL conns 0&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;C QXL templates 0&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Firewall Path&lt;/EM&gt;&lt;BR 
/&gt;&lt;EM&gt;------------------------------------------------------------------------------&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;F2F packets 6090346522 F2F bytes 458028184070&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;C F2F conns 8671 TCP violations 22964531&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;C partial conns 0 C anticipated conns 0&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;port alloc f2f 0 C no-match ranges 0&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;GTP&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;------------------------------------------------------------------------------&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;gtp tunnels created 0 gtp tunnels 0&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;gtp accel pkts 0 gtp f2f pkts 0&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;gtp spoofed pkts 0 gtp in gtp pkts 0&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;gtp signaling pkts 0 gtp tcpopt pkts 0&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;gtp apn err pkts 0&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;General&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;------------------------------------------------------------------------------&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;memory used 0 free memory 0&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;C used templates 0 pxl tmpl conns 0&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;C conns from tmpl 0 C non TCP F2F conns 239&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;C tcp handshake conn 3271 C tcp established co 15167&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;C tcp closed conns 4625 C tcp f2f handshake 297&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;C tcp f2f establishe 7689 C tcp f2f closed con 446&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;C tcp pxl handshake 0 C tcp pxl establishe 39&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;C tcp pxl closed con 1 outbound packets 51409121237&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;outbound pxl packets 90048662 outbound f2f packets 6107881879&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;outbound bytes 35899348079681 outbound pxl bytes 68762023621&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;outbound f2f bytes 3349002292752&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;(*) Statistics marked with C refer to current value, others refer to 
total value&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;EM&gt;[Expert@gw2:0]# fw ctl affinity -l&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;eth0: CPU 0&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;eth1: CPU 0&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;eth2: CPU 0&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;eth3: CPU 0&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;eth8: CPU 0&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;eth9: CPU 0&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;Kernel fw_0: CPU 3&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;Kernel fw_1: CPU 2&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;Kernel fw_2: CPU 1&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;Daemon mpdaemon: CPU 1 2 3&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;Daemon vpnd: CPU 1 2 3&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;Daemon fwd: CPU 1 2 3&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;Daemon in.asessiond: CPU 1 2 3&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;Daemon lpd: CPU 1 2 3&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;Daemon cpd: CPU 1 2 3&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;Daemon cprid: CPU 1 2 3&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;The current license permits the use of CPUs 0, 1, 2, 3 only.&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;EM&gt;[Expert@gw2:0]# top | grep fw_worker&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;5712 admin 15 0 0 0 0 R 32 0.0 7431:42 fw_worker_0&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;5713 admin 15 0 0 0 0 S 32 0.0 6815:17 fw_worker_1&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;5714 admin 15 0 0 0 0 R 24 0.0 5219:04 fw_worker_2&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;EM&gt;[Expert@gw2:0]# netstat -in&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;No RX-ERR and TX-ERR on any Interface.&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;I read your article on the firewall paths and activated the monitoring of heavy connections.&lt;/P&gt;&lt;P&gt;Thank you and best regards.&lt;BR /&gt;Martin&lt;/P&gt;</description>
      <pubDate>Tue, 07 Apr 2020 19:06:43 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81190#M16403</guid>
      <dc:creator>Martin_Reppich</dc:creator>
      <dc:date>2020-04-07T19:06:43Z</dc:date>
    </item>
    <item>
      <title>Re: High CPU Load while packets processed through slow path</title>
      <link>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81212#M16409</link>
      <description>&lt;P&gt;Item 1:&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;we moved to R80 in February and since then from time to time we receive alert Mail from our SMS (R80.30) that it lost connection to the active gateway of our Check Point Cluster (R80.10).&amp;nbsp; The result of my investigation was a high CPU Load (100%) on all cores due to high load on the fw_worker processes across this period.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;F2F/slowpath is 6.3M inbound and 33.9M outbound by one measurement, and 12.4M inbound and 24.7M outbound in another.&amp;nbsp; Traffic directly to and from the gateway (including logging) always goes F2F, and these imbalances and the behavior described above scream an excessive logging rate, with fwd starving out when the 3 worker cores get busy; not sure if that is just a symptom or the cause.&amp;nbsp; Would suggest trying to reduce the logging rate, focusing on disabling logging for DNS, NetBIOS and other UDP-based protocols, perhaps HTTP connections as well.&lt;/P&gt;
&lt;P&gt;Item 2: There is only one SND core so Multi-Queue will not help.&lt;/P&gt;
&lt;P&gt;Item 3: Please provide the output of &lt;STRONG&gt;fwaccel stat&lt;/STRONG&gt;.&amp;nbsp; Packet and connection acceleration look good, but if NAT templates are disabled it will cause a lot of F2F and worker overhead if you have a high new-connection rate; see the next item.&amp;nbsp; Would caution against enabling NAT Templates unless the latest Jumbo HFA has been installed on your R80.10 gateway; early implementations of NAT Templates caused SecureXL problems when enabled.&lt;/P&gt;
&lt;P&gt;Item 4: During a high CPU period please provide the new connection rate; on the &lt;STRONG&gt;cpview&lt;/STRONG&gt; screen this can be seen on the Overview page.&amp;nbsp; Scroll down to the Network section and make sure to show the whole Network section.&lt;/P&gt;
&lt;P&gt;Item 5: Your single SND core seems to be doing OK so there is probably not much RX-DRP, but your bandwidth numbers under load are suspiciously close to exactly 1Gbit as Heiko noticed; if you are pushing a 1Gbps interface that close to its theoretical limit you are probably racking up overruns (RX-OVR) like crazy.&amp;nbsp; Please provide the full output of &lt;STRONG&gt;netstat -ni&lt;/STRONG&gt; for analysis.&lt;/P&gt;
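&lt;P&gt;To make the overrun check concrete, here is a minimal sketch of pulling per-interface RX-DRP and RX-OVR out of &lt;STRONG&gt;netstat -ni&lt;/STRONG&gt; output.&amp;nbsp; The sample counters are made up for illustration, and the column positions (6 and 7) assume the standard Linux layout; verify them against the header line on your own gateway first.&lt;/P&gt;

```shell
# Hypothetical netstat -ni capture (counters invented for illustration).
# Standard Linux column layout assumed:
#   Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
sample='Kernel Interface table
Iface       MTU Met    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0       1500   0  9412345      0      0      0  8123456      0      0      0 BMRU
eth9       1500   0 73124411      0    212   4051 69881234      0      0      0 BMRU'

# On the gateway you would pipe the live command instead:
#   netstat -ni | awk 'NR>2 {if ($6 > 0 || $7 > 0) print $1, "RX-DRP="$6, "RX-OVR="$7}'
busy=$(echo "$sample" | awk 'NR>2 {if ($6 > 0 || $7 > 0) print $1, "RX-DRP="$6, "RX-OVR="$7}')
echo "$busy"
```

&lt;P&gt;Sampling this twice a few seconds apart and comparing the numbers shows whether the counters are actively growing, which matters more than their absolute values.&lt;/P&gt;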
&lt;P&gt;Item 6: You're not using SHA-384 for any of your VPNs, are you?&amp;nbsp; Looks like most of your VPN traffic is fully accelerated, but using SHA-384 will cause VPN traffic to go F2F.&lt;/P&gt;
&lt;P&gt;Item 7: Even under load it looks like your firewall workers are pretty evenly balanced by the Dynamic Dispatcher, so I doubt it is an elephant flow issue but still worth investigating as Heiko mentioned.&lt;/P&gt;
&lt;P&gt;Item 8: High load can be caused by an overloaded sync network in a cluster; please provide the output of &lt;STRONG&gt;fw ctl pstat&lt;/STRONG&gt;.&amp;nbsp; You might need to do selective synchronization for the protocols mentioned in Item 1, as the combo of suddenly high connection rates and having to state-sync them too is a nasty double whammy on the workers.&lt;/P&gt;
&lt;P&gt;Item 9: Doubtful you are having memory issues, but please provide output of &lt;STRONG&gt;free -m&lt;/STRONG&gt; anyway.&lt;/P&gt;
&lt;P&gt;Item 10: You could be getting flooded with fragmented packets during a high CPU event (frags always go F2F in your version); please provide the output of &lt;STRONG&gt;fwaccel stats -p&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 07 Apr 2020 22:06:10 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81212#M16409</guid>
      <dc:creator>Timothy_Hall</dc:creator>
      <dc:date>2020-04-07T22:06:10Z</dc:date>
    </item>
    <item>
      <title>Re: High CPU Load while packets processed through slow path</title>
      <link>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81225#M16412</link>
<description>Any particular reason you're still on R80.10 for your gateway?&lt;BR /&gt;Lots of improvements in R80.20+ performance-wise.&lt;BR /&gt;Not saying it will solve this particular issue, but worth considering.</description>
      <pubDate>Wed, 08 Apr 2020 00:25:33 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81225#M16412</guid>
      <dc:creator>PhoneBoy</dc:creator>
      <dc:date>2020-04-08T00:25:33Z</dc:date>
    </item>
    <item>
      <title>Re: High CPU Load while packets processed through slow path</title>
      <link>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81253#M16417</link>
<description>&lt;P&gt;We did the upgrade with a service provider and they recommended going to R80.10 and waiting until R80.30 with kernel 3.10 is available for our open server IBM x3550 M4.&lt;/P&gt;&lt;P&gt;If such a release isn't planned, we can probably talk about an upgrade too.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 08 Apr 2020 05:15:12 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81253#M16417</guid>
      <dc:creator>Martin_Reppich</dc:creator>
      <dc:date>2020-04-08T05:15:12Z</dc:date>
    </item>
    <item>
      <title>Re: High CPU Load while packets processed through slow path</title>
      <link>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81254#M16418</link>
      <description>We're not planning support for additional Open Servers on R80.30-3.10.&lt;BR /&gt;R80.40 was released for all supported appliances with 3.10 kernel (no more 2.6 kernel).</description>
      <pubDate>Wed, 08 Apr 2020 05:17:24 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81254#M16418</guid>
      <dc:creator>PhoneBoy</dc:creator>
      <dc:date>2020-04-08T05:17:24Z</dc:date>
    </item>
    <item>
      <title>Re: High CPU Load while packets processed through slow path</title>
      <link>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81258#M16419</link>
<description>&lt;P&gt;You can print violations for SXL:&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;SecureXL violations (F2F packets) -&amp;nbsp;&amp;nbsp;fwaccel stats -p&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Did you manage to find what kind of traffic that is? Maybe some DoS or DDoS from the Internet, fragmented traffic,&amp;nbsp;IP options, or some other protocol that is not TCP/UDP?&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;What kind of encryption algorithms are you using for VPN?&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;Maybe activate drop templates?&lt;/P&gt;&lt;P&gt;Check CPVIEW for the top connections at the time of the problem.&lt;/P&gt;&lt;P&gt;Or analyze the traffic with the 'CPMonitor' tool -&amp;nbsp;&lt;SPAN&gt;sk103212&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 08 Apr 2020 06:28:05 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81258#M16419</guid>
      <dc:creator>Martin_Raska</dc:creator>
      <dc:date>2020-04-08T06:28:05Z</dc:date>
    </item>
    <item>
      <title>Re: High CPU Load while packets processed through slow path</title>
      <link>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81272#M16420</link>
      <description>&lt;P&gt;Hi &lt;a href="https://community.checkpoint.com/t5/user/viewprofilepage/user-id/13373"&gt;@Martin_Reppich&lt;/a&gt;,&lt;/P&gt;
&lt;P&gt;&lt;a href="https://community.checkpoint.com/t5/user/viewprofilepage/user-id/597"&gt;@Timothy_Hall&lt;/a&gt;&amp;nbsp; has already given a couple of good tips here.&lt;/P&gt;
&lt;P&gt;Here it is easy to see that SecureXL needs to be optimized:&lt;/P&gt;
&lt;TABLE border="1" width="100%"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="49.385474860335194%" style="background-color: #c0c0c0;"&gt;Path&lt;/TD&gt;
&lt;TD width="50.50279329608938%" style="background-color: #c0c0c0;"&gt;Packets&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="49.385474860335194%"&gt;Accelerated Path&amp;nbsp;&lt;/TD&gt;
&lt;TD width="50.50279329608938%"&gt;51409121939&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="49.385474860335194%"&gt;F2F Path&lt;/TD&gt;
&lt;TD width="50.50279329608938%"&gt;&amp;nbsp; 6090346522&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="49.385474860335194%"&gt;Accelerated VPN Path ESP enc pkts&lt;/TD&gt;
&lt;TD width="50.50279329608938%"&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 13239884&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="49.385474860335194%"&gt;Medium Path PXL packets&lt;/TD&gt;
&lt;TD width="50.50279329608938%"&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 90048662&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&lt;BR /&gt;If you use 10 Gbps, you should definitely use Multi-Queue. This means you need more SNDs to use MQ. I would definitely try that. Read more here&amp;nbsp; &lt;A href="https://community.checkpoint.com/docs/DOC-3352-r80x-performance-tuning-tip-multi-queue" target="_blank" rel="noopener"&gt;R80.x - Performance Tuning Tip - Multi Queue&lt;/A&gt; and here&amp;nbsp;&lt;A href="https://community.checkpoint.com/t5/General-Management-Topics/New-R80-x-Performance-Tuning-Intel-Hardware/m-p/48697/highlight/true#M8306" target="_self"&gt;R80.x - Performance Tuning Tip - Intel Hardware&lt;/A&gt;. &lt;/P&gt;
&lt;P&gt;The "Accelerated VPN Path" counter shows that only a little processing is done here, and many VPN packets probably end up in the F2F path. I would adjust the VPN encryption algorithms so that more packets are processed in the "Accelerated VPN Path".&amp;nbsp; With "fwaccel stat" you can see which algorithms are supported in SecureXL.&lt;/P&gt;
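&lt;P&gt;For reference, the path split can be quantified directly from the counters in the table above; a small sketch (plain shell arithmetic, figures copied from the table):&lt;/P&gt;

```shell
# Packet counts per path, taken from the table above.
accel=51409121939   # Accelerated Path
f2f=6090346522      # F2F Path
vpn=13239884        # Accelerated VPN Path ESP enc pkts
pxl=90048662        # Medium Path PXL packets

total=$((accel + f2f + vpn + pxl))
# Compute the F2F share as a percentage with one decimal place.
f2f_pct=$(awk -v a="$f2f" -v t="$total" 'BEGIN {printf "%.1f", 100 * a / t}')
echo "total=$total F2F=${f2f_pct}%"
```

&lt;P&gt;Roughly one packet in ten is going F2F here, which lines up with the slow-path load being discussed in this thread.&lt;/P&gt;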
&lt;P&gt;Regards&lt;BR /&gt;Heiko&lt;/P&gt;</description>
      <pubDate>Wed, 08 Apr 2020 08:18:32 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81272#M16420</guid>
      <dc:creator>HeikoAnkenbrand</dc:creator>
      <dc:date>2020-04-08T08:18:32Z</dc:date>
    </item>
    <item>
      <title>Re: High CPU Load while packets processed through slow path</title>
      <link>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81401#M16440</link>
<description>&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;a href="https://community.checkpoint.com/t5/user/viewprofilepage/user-id/597"&gt;@Timothy_Hall&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thank you for the detailed information.&lt;/P&gt;&lt;P&gt;I answered the items and attached them as a txt file to preserve the formatting.&lt;/P&gt;&lt;P&gt;Best regards&lt;BR /&gt;Martin&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 09 Apr 2020 05:30:55 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81401#M16440</guid>
      <dc:creator>Martin_Reppich</dc:creator>
      <dc:date>2020-04-09T05:30:55Z</dc:date>
    </item>
    <item>
      <title>Re: High CPU Load while packets processed through slow path</title>
      <link>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81544#M16475</link>
      <description>&lt;P&gt;Item 1: OK great so it looks like you did have excessive logging due to duplicating your logs to another log server, good to see a 10% drop in CPU.&lt;/P&gt;
&lt;P&gt;Item 3: While your accept templating rate is pretty good, per the output of &lt;STRONG&gt;fwaccel stat&lt;/STRONG&gt; take a look at rule #11.&amp;nbsp; You probably have some kind of DCE/RPC service in use there.&amp;nbsp; Try to move that rule or service as far down in your rulebase as you can, right before your cleanup rule if possible.&amp;nbsp; Install policy and check &lt;STRONG&gt;fwaccel stat&lt;/STRONG&gt; again; rinse and repeat until template offloads are not disabled until one of the very last rules in your rulebase.&amp;nbsp; This will help reduce F2F by increasing the connection templating rate even further.&lt;/P&gt;
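&lt;P&gt;The rinse-and-repeat loop above amounts to reading the blocking rule number out of &lt;STRONG&gt;fwaccel stat&lt;/STRONG&gt; after each policy install; a small sketch against captured output (the sample text is assumed, check the exact wording on your own gateway):&lt;/P&gt;

```shell
# Hypothetical fwaccel stat excerpt; exact wording may differ per version.
stat='Accept Templates : disabled by Firewall
Layer Network disables template offloads from rule #11
Throughput acceleration still enabled.'

# On the gateway:  fwaccel stat | sed -n 's/.*rule #\([0-9]*\).*/\1/p'
rule=$(echo "$stat" | sed -n 's/.*rule #\([0-9]*\).*/\1/p')
echo "template offloads stop at rule $rule"
```

&lt;P&gt;Each time the reported rule number moves further down the rulebase, more of the rules above it become eligible for connection templating.&lt;/P&gt;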
&lt;P&gt;NAT Templates are disabled as expected; it might help to enable them, but we won't know until I see the new connection rate during a high CPU period as requested in Item 4.&lt;/P&gt;
&lt;P&gt;Item 5: Your interfaces are running clean and your single SND is doing fine with the load; you don't need to adjust your CoreXL split to add another SND or enable Multi-Queue.&amp;nbsp; Some negligible RX-DRP on eth9 but not nearly enough to worry about.&lt;/P&gt;
&lt;P&gt;Item 6: Confirmed no SHA-384 in use for VPNs.&lt;/P&gt;
&lt;P&gt;Item 7: Definitely do NOT want to adjust CoreXL split and reduce number of firewall workers.&lt;/P&gt;
&lt;P&gt;Item 8: Hmm, your sync network does appear to be struggling a bit based on &lt;STRONG&gt;fw ctl pstat&lt;/STRONG&gt; (though the underlying sync interface is running clean), which can cause high CPU on the firewall workers.&amp;nbsp; Would suggest disabling state synchronization for DNS and HTTP for sure, and perhaps the HTTPS service as well, like this:&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="dns.jpg" style="width: 594px;"&gt;&lt;img src="https://community.checkpoint.com/t5/image/serverpage/image-id/5495i1DA2CF0AA22E7214/image-size/large?v=v2&amp;amp;px=999" role="button" title="dns.jpg" alt="dns.jpg" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;Also look at Top-Protocols in &lt;STRONG&gt;cpview&lt;/STRONG&gt; and consider disabling synchronization for those services as well to help reduce sync traffic.&lt;/P&gt;
&lt;P&gt;Item 9: Memory looks good, no allocation failures.&lt;/P&gt;
&lt;P&gt;Item 10: Given overall number of frames, frag numbers do not look excessive.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Summary: Major action items are #3 and #8 which should help quite a bit.&amp;nbsp; Beyond that enabling NAT Templates (with latest Jumbo HFA) may help, but we need to see Network screen of cpview and new connection rate during a high CPU period first.&lt;/EM&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 10 Apr 2020 11:40:27 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81544#M16475</guid>
      <dc:creator>Timothy_Hall</dc:creator>
      <dc:date>2020-04-10T11:40:27Z</dc:date>
    </item>
    <item>
      <title>Re: High CPU Load while packets processed through slow path</title>
      <link>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81789#M16543</link>
<description>&lt;P&gt;&lt;STRONG&gt;Item 3:&lt;/STRONG&gt; Rule #11 had zero hits since the upgrade and I checked it. This rule is not necessary anymore, so I disabled it for now. Afterwards the output of &lt;STRONG&gt;fwaccel stat&lt;/STRONG&gt; shows the following.&lt;/P&gt;&lt;P&gt;Accept Templates : disabled by Firewall&lt;BR /&gt;Layer Network disables template offloads from rule #69&lt;BR /&gt;Throughput acceleration still enabled.&lt;/P&gt;&lt;P&gt;Rule #69 is a rule for NFS to an FTP filer and includes the service group "NFS", which has RPC services inside.&amp;nbsp;&lt;/P&gt;&lt;P&gt;I analyzed the rule and only five services are matching. That's why I changed the current services from &lt;STRONG&gt;NFS, udp/111, tcp/111&lt;/STRONG&gt; and &lt;STRONG&gt;tcp-high-ports&lt;/STRONG&gt; to&amp;nbsp;&lt;STRONG&gt;nfsd-tcp (2049), nfsd (2049/udp), tcp/111, udp/111&lt;/STRONG&gt; and &lt;STRONG&gt;tcp/4046&lt;/STRONG&gt;.&lt;/P&gt;&lt;P&gt;Of course, after performing&amp;nbsp;&lt;STRONG&gt;fwaccel stat &lt;/STRONG&gt;again I now have a new rule to analyze. I will probably go your way and move the rules reported by &lt;STRONG&gt;fwaccel stat&lt;/STRONG&gt; to the bottom of the rulebase. It needs a little time because I have to check whether it is easily possible to move those rules down in the context of the rulebase.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Item 4:&lt;/STRONG&gt; Last weekend the problem occurred again. Here are the first outputs of the cpview overview after it started at 11Apr2020 03:13:27 AM. 
The period ended at 12Apr2020 22:35.&lt;/P&gt;&lt;P&gt;&lt;FONT color="#FF0000"&gt;&lt;STRONG&gt;&lt;U&gt;11Apr2020 03:13:27 AM&lt;/U&gt;&lt;/STRONG&gt;&lt;/FONT&gt;&lt;BR /&gt;CPU Used&lt;BR /&gt;1 25%&lt;BR /&gt;2 25%&lt;BR /&gt;3 24%&lt;BR /&gt;----------------------------------------&lt;BR /&gt;Network:&lt;BR /&gt;Bits/sec 184M&lt;BR /&gt;Packets/sec 43,240&lt;BR /&gt;Connections/sec 332&lt;BR /&gt;Concurrent connections 24,367&lt;BR /&gt;&lt;BR /&gt;&lt;FONT color="#FF0000"&gt;&lt;U&gt;&lt;STRONG&gt;11Apr2020 3:13:28&lt;/STRONG&gt; &lt;/U&gt;&lt;/FONT&gt;&lt;BR /&gt;CPU Used&lt;BR /&gt;1 81%&lt;BR /&gt;2 81%&lt;BR /&gt;3 81%&lt;BR /&gt;----------------------------------------&lt;BR /&gt;Network:&lt;BR /&gt;Bits/sec 261M&lt;BR /&gt;Packets/sec 49,371&lt;BR /&gt;Connections/sec 362&lt;BR /&gt;Concurrent connections 25,677&lt;BR /&gt;&lt;BR /&gt;&lt;FONT color="#FF0000"&gt;&lt;U&gt;&lt;STRONG&gt;11Apr2020 3:15:27&lt;/STRONG&gt;&lt;/U&gt;&lt;/FONT&gt;&lt;BR /&gt;CPU Used&lt;BR /&gt;1 99%&lt;BR /&gt;2 99%&lt;BR /&gt;3 98%&lt;BR /&gt;----------------------------------------&lt;BR /&gt;Network:&lt;BR /&gt;Bits/sec 270M&lt;BR /&gt;Packets/sec 51,216&lt;BR /&gt;Connections/sec 370&lt;BR /&gt;Concurrent connections 25,425&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;We had a failover at 11Apr2020 10:38:56, and I attached two screenshots because I noticed that the rulebase drop rate rose strongly during this period.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Failover from GW2 --&amp;gt; GW1&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="RuleBaseDropRate-gw2.JPG" style="width: 771px;"&gt;&lt;img src="https://community.checkpoint.com/t5/image/serverpage/image-id/5552i33D09D2CD015D403/image-dimensions/771x421?v=v2" width="771" height="421" role="button" title="RuleBaseDropRate-gw2.JPG" alt="RuleBaseDropRate-gw2.JPG" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="RuleBaseDropRate-gw1.JPG" style="width: 769px;"&gt;&lt;img src="https://community.checkpoint.com/t5/image/serverpage/image-id/5551i9C85EC7B05388D02/image-dimensions/769x614?v=v2" width="769" height="614" role="button" title="RuleBaseDropRate-gw1.JPG" alt="RuleBaseDropRate-gw1.JPG" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Item 8:&lt;/STRONG&gt; I disabled the synchronization for domain-udp, http and https. I will keep an eye on the average load.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks and best regards&lt;/P&gt;&lt;P&gt;Martin&lt;/P&gt;</description>
      <pubDate>Tue, 14 Apr 2020 11:12:39 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81789#M16543</guid>
      <dc:creator>Martin_Reppich</dc:creator>
      <dc:date>2020-04-14T11:12:39Z</dc:date>
    </item>
    <item>
      <title>Re: High CPU Load while packets processed through slow path</title>
      <link>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81797#M16547</link>
<description>&lt;P&gt;&lt;STRONG&gt;Item 3:&lt;/STRONG&gt; Rule #11 had zero hits since the upgrade and I checked it. This rule is not necessary anymore, so I disabled it for now. Afterwards the output of fwaccel stat shows the following.&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Accept Templates : disabled by Firewall&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;Layer Network disables template offloads from rule #69&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;Throughput acceleration still enabled.&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;Rule #69 is a rule for NFS to an FTP filer and includes the service group "NFS", which has RPC services inside.&amp;nbsp;&lt;/P&gt;&lt;P&gt;I analyzed the rule and saw that only five services are matching. These are&amp;nbsp;&lt;STRONG&gt;nfsd-tcp (2049), nfsd (udp/2049), tcp/111, udp/111 and tcp/4046&lt;/STRONG&gt;.&lt;BR /&gt;I reduced the services for the rule to the necessary ones from&lt;SPAN&gt;&amp;nbsp;formerly&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;NFS (Group), udp-111, tcp-111 and tcp-high-ports.&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;After installing the policy, a new rule is reported when executing fwaccel stat. I will try to follow your advice and move those rules down in the rulebase. It will take a little time because I have to check that this does not have a negative effect in the context of the rulebase.&lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt;Item 4:&lt;/STRONG&gt; The problem occurred again last weekend for the period 11Apr 03:13:27 AM until 12Apr 10:35:00 PM. 
Here are the requested outputs of the CPVIEW Overview when the CPU load started to increase.&lt;BR /&gt;&lt;BR /&gt;&lt;U&gt;11Apr2020 3:13:27&lt;/U&gt;&lt;BR /&gt;| CPU Used&lt;BR /&gt;| 1 25%&lt;BR /&gt;| 2 25%&lt;BR /&gt;| 3 24%&lt;BR /&gt;| ----------------------------------------&lt;BR /&gt;| Network:&lt;BR /&gt;| Bits/sec 184M&lt;BR /&gt;| Packets/sec 43,240&lt;BR /&gt;| Connections/sec 332&lt;BR /&gt;| Concurrent connections 24,367&lt;BR /&gt;| ----------------------------------------&lt;BR /&gt;&lt;BR /&gt;&lt;U&gt;11Apr2020 3:13:28&lt;/U&gt;&lt;BR /&gt;| CPU Used&lt;BR /&gt;| 1 81%&lt;BR /&gt;| 2 81%&lt;BR /&gt;| 3 81%&lt;BR /&gt;| ----------------------------------------&lt;BR /&gt;| Network:&lt;BR /&gt;| Bits/sec 261M&lt;BR /&gt;| Packets/sec 49,371&lt;BR /&gt;| Connections/sec 362&lt;BR /&gt;| Concurrent connections 25,677&lt;BR /&gt;| ----------------------------------------&lt;BR /&gt;&lt;BR /&gt;&lt;U&gt;11Apr2020 3:15:27&lt;/U&gt;&lt;BR /&gt;| CPU Used&lt;BR /&gt;| 1 99%&lt;BR /&gt;| 2 99%&lt;BR /&gt;| 3 98%&lt;BR /&gt;| ----------------------------------------&lt;BR /&gt;| Network:&lt;BR /&gt;| Bits/sec 270M&lt;BR /&gt;| Packets/sec 51,216&lt;BR /&gt;| Connections/sec 370&lt;BR /&gt;| Concurrent connections 25,425&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;During this time a failover happened because one interface was down.&lt;BR /&gt;&lt;STRONG&gt;Failover GW2 --&amp;gt; GW1&lt;/STRONG&gt;&lt;BR /&gt;I noticed a huge increase in firewall rulebase drops during this time. You can see this in the screenshots.&lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt;Item 8:&lt;/STRONG&gt; As suggested, I deactivated the synchronization for domain-udp, http and https.&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Thank you, best regards&lt;BR /&gt;Martin&lt;/P&gt;</description>
      <pubDate>Tue, 14 Apr 2020 12:38:57 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81797#M16547</guid>
      <dc:creator>Martin_Reppich</dc:creator>
      <dc:date>2020-04-14T12:38:57Z</dc:date>
    </item>
    <item>
      <title>Re: High CPU Load while packets processed through slow path</title>
      <link>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81799#M16548</link>
      <description>&lt;P&gt;Your concurrent connections numbers are consistently and suspiciously close to 25,000 which is the old default hard limit for number of connections.&amp;nbsp; On the firewall/cluster object under Optimizations, do you have "Capacity Optimization" set to Automatically?&lt;/P&gt;
&lt;P&gt;Assuming you are set for Automatically, it looks like your firewall is getting blasted with drops (not legit accepted connections) during the high CPU period.&amp;nbsp; You will need to look at the firewall logs during the slow periods and try to figure out whether these blasts are coming from the inside/DMZ or the outside.&amp;nbsp; I don't think you can view policy drops per interface from something like &lt;STRONG&gt;cpview&lt;/STRONG&gt;, but perhaps the graphing application that provided those screenshots can.&amp;nbsp; The policy optimizations you are doing via &lt;STRONG&gt;fwaccel stat&lt;/STRONG&gt; won't help for dropped traffic, which still has to go through the whole rulebase.&amp;nbsp; It is also possible these drops are a red herring caused by the failover, since you enabled selective sync for various services.&lt;/P&gt;
&lt;P&gt;If the blasts are from the outside, I would suggest enabling the SecureXL penalty box feature which will blacklist the offending IP address(es) and efficiently drop all their traffic for awhile in SecureXL.&amp;nbsp; If the blasts are coming from the inside you'll have some more latitude to hopefully deal with whatever is causing them, be on the lookout for NMS systems sending lots of SNMP/ICMP probes, and auditors running some kind of automated port-scanning tools.&amp;nbsp; &amp;nbsp;&lt;A class="cp_link sc_ellipsis" href="https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&amp;amp;solutionid=sk74520&amp;amp;partition=Advanced&amp;amp;product=SecureXL" target="_blank" rel="noopener"&gt;sk74520: What is the SecureXL &lt;STRONG&gt;penalty&lt;/STRONG&gt; &lt;STRONG&gt;box&lt;/STRONG&gt; mechanism for offending IP addresses?&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;The SecureXL penalty box will only drop traffic coming in from outside interfaces by default, but it can be set to penalty-box traffic coming from the inside as well.&amp;nbsp; However I wouldn't recommend doing that until you understand what is going on, as the penalty box will drop ALL traffic from an offending IP address on the inside, so that could adversely impact production.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 14 Apr 2020 13:15:52 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81799#M16548</guid>
      <dc:creator>Timothy_Hall</dc:creator>
      <dc:date>2020-04-14T13:15:52Z</dc:date>
    </item>
    <item>
      <title>Re: High CPU Load while packets processed through slow path</title>
      <link>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81951#M16569</link>
      <description>&lt;P&gt;"Capacity Optimization" is set to Automatically. And here is the calculated value of the active node.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Active Node&lt;/STRONG&gt;&lt;BR /&gt;&lt;FONT color="#999999"&gt;&lt;EM&gt;[Expert@gw1:0]# fw tab -t connections -s&lt;/EM&gt;&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT color="#999999"&gt;&lt;EM&gt;HOST &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;&amp;nbsp; NAME&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; ID&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; #VALS&amp;nbsp; #PEAK #SLINKS&lt;/EM&gt;&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT color="#999999"&gt;&lt;EM&gt;localhost&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; connections 8158&amp;nbsp;&amp;nbsp;&amp;nbsp; 30566&amp;nbsp;&amp;nbsp; 65171&amp;nbsp;&amp;nbsp;&amp;nbsp; 91753&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;I used Check Point diagnostics view for the visualization of the cpview history database. In my opinion there is a correlation. The packet rate on our outside interface increases and at the same time the system performance and rulebase drops go up. See the attached Screenshot.&lt;/P&gt;&lt;P&gt;Every Sunday I generate a Rule Usage Report with Tufin Secure Track. The result from last sunday, our most used rule is the CleanUp Rule with 91% (&lt;SPAN&gt;20,694,963,845 Hits)&amp;nbsp;&lt;/SPAN&gt;followed by another CleanUp rule with 2% hits (&lt;SPAN&gt;536,587,767).&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;We have activated Logging for this rules. Maybe not the drop is the issue but the risen logging rate?&amp;nbsp;&lt;/P&gt;&lt;P&gt;And maybe it's useful to say that we have a public class B network.&lt;/P&gt;&lt;P&gt;I filtert with SmartConsole for the last period with high cpu load last weekend (4/11/2020 - 4/13/2020). I have a result for top sources and services. 
It's suspicious that there are several IP addresses from the same network. I also attached a screenshot.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;The SecureXL penalty box sounds interesting; I had never heard of it before, so I will read the SK first.&lt;BR /&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Best regards&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 15 Apr 2020 11:49:03 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81951#M16569</guid>
      <dc:creator>Martin_Reppich</dc:creator>
      <dc:date>2020-04-15T11:49:03Z</dc:date>
    </item>
    <item>
      <title>Re: High CPU Load while packets processed through slow path</title>
      <link>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81957#M16573</link>
      <description>&lt;P&gt;&lt;EM&gt;&amp;gt; And maybe it's useful to say that we have a public class B network.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;Oooof, yep you are going to have a lot of drops from all the background noise crap of the Internet with a potential attack surface of that size.&amp;nbsp; Looking at your screenshots confirms that you have lots of drops coming in from the Internet.&amp;nbsp; This is a common issue, here is what you need to do:&lt;/P&gt;
&lt;P&gt;1) If possible, configure Geo Policy in a blacklist configuration to drop traffic to/from countries that your organization has no business talking to.&amp;nbsp; &lt;BR /&gt;Geo Policy drops happen very early in INSPECT processing (right after antispoofing), and can substantially lower CPU usage (I've seen CPU load dive 30% after configuring it with clients like yourself who have a large block of Internet-routable addresses).&amp;nbsp; Normally I'd suggest using Geo Updatable Objects here instead, but support for those was added in R80.20 and your gateway is still on R80.10.&amp;nbsp; Depending on the scope and geographic reach of your organization, using Geo Policy may not be feasible though.&amp;nbsp; &lt;STRONG&gt;Edit: Be sure to define a Geo Policy exception for DNS traffic (both TCP and UDP port 53) to avoid some rather random-looking DNS problems that can result (this nasty situation is covered on pages 280-281 of my latest book).&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;2) Enable the SecureXL penalty box.&amp;nbsp; The default is to penalty-box an IP address with 500+ drops/sec for a period of 3 minutes, and only for traffic on the external interface.&amp;nbsp; I'd suggest enabling it with the default settings and seeing how it behaves for a week or so; assuming it doesn't break anything in production with the defaults, in the real world I usually tweak the threshold down from 500 drops/sec to the 200 range or so with no adverse impacts.&amp;nbsp; Do not attempt to use the optimized drops feature, as it is much older.&lt;/P&gt;
&lt;P&gt;3) Over time if you have repeat offenders that keep getting tossed in the penalty box, create an explicit rule at the top of your rulebase that matches these bad actors and silently drops them with no logging.&lt;/P&gt;
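&lt;P&gt;As a rough way to shortlist such repeat offenders, one could tally drops per source from an exported log extract.&amp;nbsp; The &lt;STRONG&gt;src=&lt;/STRONG&gt; field format and the addresses below are assumptions for illustration, not a fixed log export layout:&lt;/P&gt;

```shell
# Hypothetical drop-log extract (field layout and IPs invented for illustration).
logs='drop src=203.0.113.7 dst=192.0.2.1 service=445
drop src=203.0.113.7 dst=192.0.2.2 service=445
drop src=198.51.100.9 dst=192.0.2.1 service=23
drop src=203.0.113.7 dst=192.0.2.3 service=445'

# Count drops per source and keep sources at or above a chosen threshold (3 here).
offenders=$(echo "$logs" | grep -o 'src=[0-9.]*' | sort | uniq -c | sort -rn |
            awk '$1 >= 3 {print $2}')
echo "$offenders"
```

&lt;P&gt;Sources that keep showing up at the top of such a tally are the candidates for the explicit silent-drop rule described above.&lt;/P&gt;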
&lt;P&gt;Let us know how it goes...&lt;/P&gt;</description>
      <pubDate>Wed, 15 Apr 2020 14:51:21 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/High-CPU-Load-while-packets-processed-through-slow-path/m-p/81957#M16573</guid>
      <dc:creator>Timothy_Hall</dc:creator>
      <dc:date>2020-04-15T14:51:21Z</dc:date>
    </item>
  </channel>
</rss>

