<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: High CPU on Multi Queue Cores in Firewall and Security Management</title>
    <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/High-CPU-on-Multi-Queue-Cores/m-p/67555#M5162</link>
    <description>&lt;P&gt;There should be more than 4 queues assigned to those 10Gbps interfaces by Multi-Queue with an 8/12 split, unless those interfaces are using the igb driver (&lt;STRONG&gt;ethtool -i&lt;/STRONG&gt; to check), which is limited to a maximum of 4 queues due to a driver limitation.&amp;nbsp; If this is the case, there is nothing you can do about it other than using a different NIC that supports the ixgbe driver, which can have up to 16 queues.&amp;nbsp; If you are in fact using ixgbe, see&amp;nbsp;&lt;A class="cp_link sc_ellipsis" href="https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&amp;amp;solutionid=sk154392&amp;amp;partition=Advanced&amp;amp;product=Security" target="_blank" rel="noopener"&gt;sk154392: An available CPU Core is not handling any queue, when using Multi-Q.&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;It is also possible that your manual process affinity for the fwd daemon is interfering with the assignment of more SND/IRQ cores for traffic processing with Multi-Queue.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As far as why PSLXL is 44% with so few blades enabled, this is probably due to the presence of microsoft-ds traffic (port 445), which by default is sent to PSLXL.&amp;nbsp; You can confirm by running &lt;STRONG&gt;fwaccel conns | grep 445&lt;/STRONG&gt;.&amp;nbsp; If you see an s/S flag on those connections, it indicates the connection is taking the Medium Path.&amp;nbsp; Also look for other connections with the s/S flag in the output of &lt;STRONG&gt;fwaccel conns&lt;/STRONG&gt; for clues.&lt;/P&gt;
&lt;P&gt;As far as what you can do about this, if you upgrade to R80.20 Jumbo HFA 103 or later you can force this traffic into the Accelerated Path with &lt;STRONG&gt;fast_accel&lt;/STRONG&gt; as discussed here:&amp;nbsp;&lt;A class="cp_link sc_ellipsis" href="https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&amp;amp;solutionid=sk156672&amp;amp;partition=Advanced&amp;amp;product=SecureXL%22" target="_blank" rel="noopener"&gt;sk156672: SecureXL Fast Accelerator (fw &lt;STRONG&gt;fast_accel&lt;/STRONG&gt;) for R80.20 and above&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Be warned, however, that doing this will "fastpath" the traffic with a minimum of inspection, and bad things can happen security-wise if a threat or other security problem occurs in the fastpath'ed connections.&amp;nbsp; Also keep in mind that the load will increase on your SND/IRQ cores as this traffic is forced off the Firewall Workers into SecureXL; you may want to figure out why Multi-Queue is not using all available SND/IRQ cores first.&lt;/P&gt;
</description>
    <pubDate>Sat, 16 Nov 2019 12:57:10 GMT</pubDate>
    <dc:creator>Timothy_Hall</dc:creator>
    <dc:date>2019-11-16T12:57:10Z</dc:date>
    <item>
      <title>High CPU on Multi Queue Cores</title>
      <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/High-CPU-on-Multi-Queue-Cores/m-p/67532#M5158</link>
      <description>&lt;P&gt;Hardware: 13800 with 20 cores, 8/12 split, no SMT.&lt;BR /&gt;OS: R80.20 Take 47.&lt;BR /&gt;Blades enabled: none (just FW/VPN).&lt;/P&gt;&lt;P&gt;MQ is enabled on two 10G interfaces. The 4 CPU cores tied to these interfaces are running at 75-85%, with spikes up to 95%. One core is tied up with fwd. The other 3 SNDs are running at 1-2%. Workers are running around 50%.&lt;/P&gt;&lt;P&gt;From cpview:&lt;BR /&gt;Bandwidth: 4-5 Gbps&lt;BR /&gt;800K-900K packets/sec, 10K conn/sec.&lt;BR /&gt;netstat -ni is NOT showing any drops.&lt;/P&gt;&lt;P&gt;[Expert@13800:0]# fwaccel stats -s&lt;BR /&gt;Accelerated conns/Total conns : (-3%)&lt;BR /&gt;Accelerated pkts/Total pkts : (51%)&lt;BR /&gt;F2Fed pkts/Total pkts : (4%)&lt;BR /&gt;F2V pkts/Total pkts : (1%)&lt;BR /&gt;CPASXL pkts/Total pkts : (0%)&lt;BR /&gt;PSLXL pkts/Total pkts : (&lt;STRONG&gt;44%&lt;/STRONG&gt;)&lt;/P&gt;&lt;P&gt;Question: what could be the reason for 44% PSLXL pkts/Total pkts?&lt;BR /&gt;What can be done to reduce the load on the first 4 cores?&lt;/P&gt;</description>
      <pubDate>Fri, 15 Nov 2019 13:55:41 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Firewall-and-Security-Management/High-CPU-on-Multi-Queue-Cores/m-p/67532#M5158</guid>
      <dc:creator>Muazzam</dc:creator>
      <dc:date>2019-11-15T13:55:41Z</dc:date>
    </item>
    <item>
      <title>Re: High CPU on Multi Queue Cores</title>
      <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/High-CPU-on-Multi-Queue-Cores/m-p/67535#M5159</link>
      <description>&lt;P&gt;&lt;SPAN&gt;&amp;gt;&amp;gt;&amp;gt; Question: what could be a reason for 44% PSLXL pkts/Total pkts?&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;PSLXL is the SecureXL medium path.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;Medium path&lt;/EM&gt;&lt;/STRONG&gt; (&lt;STRONG&gt;PXL&lt;/STRONG&gt;) - The CoreXL layer passes the packet to one of the CoreXL FW instances for processing (even when CoreXL is disabled, the CoreXL infrastructure is used by the SecureXL device to send the packet to the single FW instance that still functions). When the Medium Path is available, the TCP handshake is fully accelerated with SecureXL. Rulebase match is achieved for the first packet through an existing connection acceleration template. SYN-ACK and ACK packets are also fully accelerated. However, once data starts flowing, the packets are handled by an FWK instance so the data can be streamed for Content Inspection: any packet containing data is sent to FWK for data extraction to build the data stream. RST, FIN, and FIN-ACK packets are again handled only by SecureXL, as they do not contain any data that needs to be streamed. This path is available only when CoreXL is enabled.&lt;/P&gt;
&lt;P&gt;The packet flow is the same as when the packet is handled by the SecureXL device, except for:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;IPS (some protections)&lt;/LI&gt;
&lt;LI&gt;VPN (in some configurations)&lt;/LI&gt;
&lt;LI&gt;Application Control&lt;/LI&gt;
&lt;LI&gt;Content Awareness&lt;/LI&gt;
&lt;LI&gt;Anti-Virus&lt;/LI&gt;
&lt;LI&gt;Anti-Bot&lt;/LI&gt;
&lt;LI&gt;HTTPS Inspection&lt;/LI&gt;
&lt;LI&gt;Proxy mode&lt;/LI&gt;
&lt;LI&gt;Mobile Access&lt;/LI&gt;
&lt;LI&gt;VoIP&lt;/LI&gt;
&lt;LI&gt;Web Portals.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;EM&gt;&lt;STRONG&gt;PXL&lt;/STRONG&gt;&lt;/EM&gt; vs. &lt;EM&gt;&lt;STRONG&gt;PSLXL&lt;/STRONG&gt;&lt;/EM&gt; - the technology name for the combination of SecureXL and PSL; PXL was renamed to PSLXL in R80.20.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;gt;&amp;gt;&amp;gt; What can be done to reduce load on the first 4 cores?&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;This is normal for MQ cores at a high packet rate.&lt;/P&gt;
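The path distribution asked about can be read off programmatically. A minimal parsing sketch, with the questioner's fwaccel stats -s output embedded so it runs anywhere (on a live gateway you would pipe the real command into the same awk):

```shell
#!/bin/sh
# Print each SecureXL path's share of total packets from
# `fwaccel stats -s` output. Sample output is embedded below;
# on a gateway: fwaccel stats -s | awk -F'[(%)]' '...'
awk -F'[(%)]' '/pkts\/Total pkts/ {sub(/ *: *$/, "", $1); print $1" = "$2"%"}' <<'EOF'
Accelerated conns/Total conns : (-3%)
Accelerated pkts/Total pkts : (51%)
F2Fed pkts/Total pkts : (4%)
F2V pkts/Total pkts : (1%)
CPASXL pkts/Total pkts : (0%)
PSLXL pkts/Total pkts : (44%)
EOF
```

Anything substantial in the PSLXL or F2F rows means a matching share of traffic is leaving the fully accelerated path.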
</description>
      <pubDate>Fri, 15 Nov 2019 17:28:20 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Firewall-and-Security-Management/High-CPU-on-Multi-Queue-Cores/m-p/67535#M5159</guid>
      <dc:creator>HeikoAnkenbrand</dc:creator>
      <dc:date>2019-11-15T17:28:20Z</dc:date>
    </item>
    <item>
      <title>Re: High CPU on Multi Queue Cores</title>
      <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/High-CPU-on-Multi-Queue-Cores/m-p/67537#M5160</link>
      <description>&lt;P&gt;Read more in my articles:&lt;/P&gt;
&lt;P&gt;&lt;A href="https://community.checkpoint.com/docs/DOC-3041-r80x-security-gateway-architecture-logical-packet-flow" target="_blank" rel="noopener"&gt; R80.x - Security Gateway Architecture (Logical Packet Flow)&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;PSL F2F and PSLXL medium path:&lt;/P&gt;
&lt;P&gt;&lt;A href="https://community.checkpoint.com/docs/DOC-3073-r80x-security-gateway-architecture-content-inspection" target="_blank" rel="noopener"&gt;R80.x - Security Gateway Architecture (Content Inspection)&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;MQ:&lt;/P&gt;
&lt;P&gt;&lt;A href="https://community.checkpoint.com/docs/DOC-3352-r80x-performance-tuning-tip-multi-queue" target="_blank" rel="noopener"&gt;R80.x - Performance Tuning Tip - Multi Queue&lt;/A&gt;&lt;/P&gt;
</description>
      <pubDate>Fri, 15 Nov 2019 14:32:13 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Firewall-and-Security-Management/High-CPU-on-Multi-Queue-Cores/m-p/67537#M5160</guid>
      <dc:creator>HeikoAnkenbrand</dc:creator>
      <dc:date>2019-11-15T14:32:13Z</dc:date>
    </item>
    <item>
      <title>Re: High CPU on Multi Queue Cores</title>
      <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/High-CPU-on-Multi-Queue-Cores/m-p/67555#M5162</link>
      <description>&lt;P&gt;There should be more than 4 queues assigned to those 10Gbps interfaces by Multi-Queue with an 8/12 split, unless those interfaces are using the igb driver (&lt;STRONG&gt;ethtool -i&lt;/STRONG&gt; to check), which is limited to a maximum of 4 queues due to a driver limitation.&amp;nbsp; If this is the case, there is nothing you can do about it other than using a different NIC that supports the ixgbe driver, which can have up to 16 queues.&amp;nbsp; If you are in fact using ixgbe, see&amp;nbsp;&lt;A class="cp_link sc_ellipsis" href="https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&amp;amp;solutionid=sk154392&amp;amp;partition=Advanced&amp;amp;product=Security" target="_blank" rel="noopener"&gt;sk154392: An available CPU Core is not handling any queue, when using Multi-Q.&lt;/A&gt;&lt;/P&gt;
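Checking the driver and the number of queues actually assigned can be sketched as follows. The interface name eth2 and the /proc/interrupts sample are illustrative assumptions, not output from this gateway; the sample is embedded so the sketch runs anywhere:

```shell
#!/bin/sh
# Count the RX/TX queues assigned to an interface by counting its
# TxRx interrupt lines in /proc/interrupts (one line per queue).
sample=' 40:  1 0 0 0  PCI-MSI-edge  eth2-TxRx-0
 41:  0 2 0 0  PCI-MSI-edge  eth2-TxRx-1
 42:  0 0 3 0  PCI-MSI-edge  eth2-TxRx-2
 43:  0 0 0 4  PCI-MSI-edge  eth2-TxRx-3'
queues=$(printf '%s\n' "$sample" | grep -c 'eth2-TxRx')
echo "eth2 queues: $queues"
# On a live gateway you would run instead:
#   ethtool -i eth2                        # shows the driver (igb vs ixgbe)
#   grep -c 'eth2-TxRx' /proc/interrupts   # queues assigned to eth2
```

A count stuck at 4 on an ixgbe interface while SND cores sit idle would match the sk154392 scenario.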
&lt;P&gt;It is also possible that your manual process affinity for the fwd daemon is interfering with the assignment of more SND/IRQ cores for traffic processing with Multi-Queue.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As far as why PSLXL is 44% with so few blades enabled, this is probably due to the presence of microsoft-ds traffic (port 445), which by default is sent to PSLXL.&amp;nbsp; You can confirm by running &lt;STRONG&gt;fwaccel conns | grep 445&lt;/STRONG&gt;.&amp;nbsp; If you see an s/S flag on those connections, it indicates the connection is taking the Medium Path.&amp;nbsp; Also look for other connections with the s/S flag in the output of &lt;STRONG&gt;fwaccel conns&lt;/STRONG&gt; for clues.&lt;/P&gt;
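Pulling out only the flagged connections can be sketched like this. The column layout and the sample lines are illustrative assumptions about fwaccel conns output, not captured from this gateway:

```shell
#!/bin/sh
# List connections whose flags column (assumed last field) contains
# s or S, i.e. connections going Medium Path. Sample lines are
# illustrative; on a gateway: fwaccel conns | awk '$NF ~ /[sS]/'
sample='10.1.1.5   51234  10.2.2.9   445  6  .....S.
10.1.1.6   51300  10.2.2.9   80   6  .......
10.1.1.7   51400  10.2.2.10  445  6  ....s..'
printf '%s\n' "$sample" | awk '$NF ~ /[sS]/ {print $1" -> "$3":"$4}'
```

In the sample, only the two port-445 connections are printed, matching the microsoft-ds explanation above.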
&lt;P&gt;As far as what you can do about this, if you upgrade to R80.20 Jumbo HFA 103 or later you can force this traffic into the Accelerated Path with &lt;STRONG&gt;fast_accel&lt;/STRONG&gt; as discussed here:&amp;nbsp;&lt;A class="cp_link sc_ellipsis" href="https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&amp;amp;solutionid=sk156672&amp;amp;partition=Advanced&amp;amp;product=SecureXL%22" target="_blank" rel="noopener"&gt;sk156672: SecureXL Fast Accelerator (fw &lt;STRONG&gt;fast_accel&lt;/STRONG&gt;) for R80.20 and above&lt;/A&gt;&lt;/P&gt;
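For reference, a fast_accel configuration for port 445 would look roughly like the following CLI fragment. The subnets are placeholders and the exact command syntax should be confirmed against sk156672 for your Jumbo take:

```
# Enable the SecureXL Fast Accelerator feature (R80.20 JHF 103+).
fw ctl fast_accel enable

# Force microsoft-ds traffic between two example subnets into the
# accelerated path: <src> <dst> <dst-port> <ip-proto> (6 = TCP).
fw ctl fast_accel add 10.1.1.0/24 10.2.2.0/24 445 6

# Review the configured fast_accel rules.
fw ctl fast_accel show_table
```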
&lt;P&gt;Be warned, however, that doing this will "fastpath" the traffic with a minimum of inspection, and bad things can happen security-wise if a threat or other security problem occurs in the fastpath'ed connections.&amp;nbsp; Also keep in mind that the load will increase on your SND/IRQ cores as this traffic is forced off the Firewall Workers into SecureXL; you may want to figure out why Multi-Queue is not using all available SND/IRQ cores first.&lt;/P&gt;
</description>
      <pubDate>Sat, 16 Nov 2019 12:57:10 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Firewall-and-Security-Management/High-CPU-on-Multi-Queue-Cores/m-p/67555#M5162</guid>
      <dc:creator>Timothy_Hall</dc:creator>
      <dc:date>2019-11-16T12:57:10Z</dc:date>
    </item>
    <item>
      <title>Re: High CPU on Multi Queue Cores</title>
      <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/High-CPU-on-Multi-Queue-Cores/m-p/74265#M5711</link>
      <description>&lt;P&gt;Just want to share the results:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;After upgrading the gateway to R80.20 T103, I no longer see a high PSLXL value. This was definitely an issue with T47 of R80.20.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;[Expert@13800]# fwaccel stats -s&lt;BR /&gt;Accelerated conns/Total conns : 680725/680725 (100%)&lt;BR /&gt;Accelerated pkts/Total pkts : 226544028526/238077462224 (95%)&lt;BR /&gt;F2Fed pkts/Total pkts : 11533433698/238077462224 (4%)&lt;BR /&gt;F2V pkts/Total pkts : 5507491567/238077462224 (2%)&lt;BR /&gt;CPASXL pkts/Total pkts : 0/238077462224 (0%)&lt;BR /&gt;PSLXL pkts/Total pkts : 48307/238077462224 (0%)&lt;/P&gt;</description>
      <pubDate>Wed, 05 Feb 2020 19:53:25 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Firewall-and-Security-Management/High-CPU-on-Multi-Queue-Cores/m-p/74265#M5711</guid>
      <dc:creator>Muazzam</dc:creator>
      <dc:date>2020-02-05T19:53:25Z</dc:date>
    </item>
    <item>
      <title>Re: High CPU on Multi Queue Cores</title>
      <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/High-CPU-on-Multi-Queue-Cores/m-p/74549#M5734</link>
      <description>&lt;P&gt;Depending on which blades are enabled, the drop in PSLXL path traffic after Jumbo HFA installation may be related to the fix here:&lt;/P&gt;
&lt;P&gt;&lt;A href="https://community.checkpoint.com/t5/General-Topics/First-impressions-R80-30-on-gateway-one-step-forward-one-or-two/m-p/72593" target="_blank"&gt;https://community.checkpoint.com/t5/General-Topics/First-impressions-R80-30-on-gateway-one-step-forward-one-or-two/m-p/72593&lt;/A&gt;&lt;/P&gt;
</description>
      <pubDate>Sun, 09 Feb 2020 15:11:19 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Firewall-and-Security-Management/High-CPU-on-Multi-Queue-Cores/m-p/74549#M5734</guid>
      <dc:creator>Timothy_Hall</dc:creator>
      <dc:date>2020-02-09T15:11:19Z</dc:date>
    </item>
  </channel>
</rss>

