<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Multi-Queue and LACP configuration in General Topics</title>
    <link>https://community.checkpoint.com/t5/General-Topics/Multi-Queue-and-LACP-configuration/m-p/47768#M9313</link>
    <description>&lt;P&gt;The recommendation in the ClusterXL guide to use static interface affinities is outdated.&amp;nbsp; It assumes that SecureXL is disabled (and thus automatic interface affinity is not active at all) or that automatic interface affinity does not do a good job of balancing traffic among the interfaces.&amp;nbsp; This latter assumption was definitely the case in R76 and earlier, but automatic interface affinity was substantially improved in R77+ and I have not needed to set static interface affinities for quite a long time.&lt;/P&gt;
&lt;P&gt;Multi-Queue does not directly care about bond/aggregate interfaces; it is simply enabled on the underlying physical interfaces.&amp;nbsp; MQ allows all SND/IRQ cores (up to certain limits) to have their own queues for an enabled interface, which they empty independently.&amp;nbsp; The packets associated with a single connection are always "stuck" to the same queue/core to avoid out-of-order delivery, and I assume there is some kind of balancing performed for new connections among the queues for a particular interface.&amp;nbsp; You would most definitely NOT want any kind of static interface affinities defined on an interface with Multi-Queue enabled, as doing so would interfere with the Multi-Queue sticking/balancing mechanism.&amp;nbsp; The likely result would be overloading of individual SND/IRQ cores, and possibly even out-of-order packet delivery, which is very undesirable.&lt;/P&gt;</description>
    <pubDate>Tue, 19 Mar 2019 15:47:36 GMT</pubDate>
    <dc:creator>Timothy_Hall</dc:creator>
    <dc:date>2019-03-19T15:47:36Z</dc:date>
    <item>
      <title>Multi-Queue and LACP configuration</title>
      <link>https://community.checkpoint.com/t5/General-Topics/Multi-Queue-and-LACP-configuration/m-p/47742#M9306</link>
      <description>&lt;P&gt;The ClusterXL Admin Guide states that when utilizing Link Aggregation, "To get the best performance, use static affinity for Link Aggregation", and it shows examples where the affinities for the bond member interfaces are set to different cores.&amp;nbsp; This makes sense to me, as you would not want an LACP bond to have its slave interfaces pinned to the same CPU core.&amp;nbsp; Example below:&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="lacp_affinity.png" style="width: 646px;"&gt;&lt;img src="https://community.checkpoint.com/t5/image/serverpage/image-id/185i8046A39E30120BEA/image-size/large?v=v2&amp;amp;px=999" role="button" title="lacp_affinity.png" alt="lacp_affinity.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;However, with Multi-Queue, various documentation states &lt;STRONG&gt;NOT&lt;/STRONG&gt; to manually set affinities, as doing so will cause performance issues.&lt;/P&gt;&lt;P&gt;If this is the case, is it safe to have Multi-Queue enabled on 10Gb interfaces that are part of an LACP bond where the queues map to the same CPU cores?&amp;nbsp; Specifically, I have two LACP bond interfaces, each consisting of two 10Gb interfaces, with Multi-Queue enabled on all four 10Gb interfaces.&amp;nbsp; Bond-to-interface-to-CPU mapping below:&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="BondPortCPUMapping.png" style="width: 322px;"&gt;&lt;img src="https://community.checkpoint.com/t5/image/serverpage/image-id/268i33428CC9267E37FA/image-size/large?v=v2&amp;amp;px=999" role="button" title="BondPortCPUMapping.png" alt="BondPortCPUMapping.png" /&gt;&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 19 Mar 2019 14:40:21 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/Multi-Queue-and-LACP-configuration/m-p/47742#M9306</guid>
      <dc:creator>Josh_Smith</dc:creator>
      <dc:date>2019-03-19T14:40:21Z</dc:date>
    </item>
    <item>
      <title>Re: Multi-Queue and LACP configuration</title>
      <link>https://community.checkpoint.com/t5/General-Topics/Multi-Queue-and-LACP-configuration/m-p/47768#M9313</link>
      <description>&lt;P&gt;The recommendation in the ClusterXL guide to use static interface affinities is outdated.&amp;nbsp; It assumes that SecureXL is disabled (and thus automatic interface affinity is not active at all) or that automatic interface affinity does not do a good job of balancing traffic among the interfaces.&amp;nbsp; This latter assumption was definitely the case in R76 and earlier, but automatic interface affinity was substantially improved in R77+ and I have not needed to set static interface affinities for quite a long time.&lt;/P&gt;
&lt;P&gt;Multi-Queue does not directly care about bond/aggregate interfaces; it is simply enabled on the underlying physical interfaces.&amp;nbsp; MQ allows all SND/IRQ cores (up to certain limits) to have their own queues for an enabled interface, which they empty independently.&amp;nbsp; The packets associated with a single connection are always "stuck" to the same queue/core to avoid out-of-order delivery, and I assume there is some kind of balancing performed for new connections among the queues for a particular interface.&amp;nbsp; You would most definitely NOT want any kind of static interface affinities defined on an interface with Multi-Queue enabled, as doing so would interfere with the Multi-Queue sticking/balancing mechanism.&amp;nbsp; The likely result would be overloading of individual SND/IRQ cores, and possibly even out-of-order packet delivery, which is very undesirable.&lt;/P&gt;</description>
      <pubDate>Tue, 19 Mar 2019 15:47:36 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/Multi-Queue-and-LACP-configuration/m-p/47768#M9313</guid>
      <dc:creator>Timothy_Hall</dc:creator>
      <dc:date>2019-03-19T15:47:36Z</dc:date>
    </item>
    <item>
      <title>Re: Multi-Queue and LACP configuration</title>
      <link>https://community.checkpoint.com/t5/General-Topics/Multi-Queue-and-LACP-configuration/m-p/47771#M9314</link>
      <description>&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE style="border: 1px solid #c6c6c6; border-collapse: separate; border-radius: 5px; background-color: #e15180; padding: 6px; text-indent: 10px;" width="100%"&gt;
&lt;THEAD&gt;
&lt;TR&gt;
&lt;TH align="left"&gt;&lt;FONT size="4" color="#ffffff"&gt;What is Multi Queue?&lt;/FONT&gt;&lt;/TH&gt;
&lt;/TR&gt;
&lt;/THEAD&gt;
&lt;/TABLE&gt;
&lt;P style="margin-bottom: .0001pt;"&gt;It is an acceleration feature that lets you assign more than one packet queue and CPU to an interface.&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="font-size: 11.0pt;"&gt;When most of the traffic is accelerated by the SecureXL, the CPU load from the CoreXL SND instances can be very high, while the CPU load from the CoreXL FW instances can be very low. This is an inefficient utilization of CPU capacity.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="font-size: 11.0pt;"&gt;By default, the number of CPU cores allocated to CoreXL SND instances is limited by the number of network interfaces that handle the traffic. Because each interface has one traffic queue, only one CPU core can handle each traffic queue at a time. This means that each CoreXL SND instance can use only one CPU core at a time for each network interface.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="font-size: 11.0pt;"&gt;Check Point Multi-Queue lets you configure more than one traffic queue for each network interface. For each interface, you can use more than one CPU core (that runs CoreXL SND) for traffic acceleration. This balances the load efficiently between the CPU cores that run the CoreXL SND instances and the CPU cores that run CoreXL FW instances.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="font-size: 11.0pt;"&gt;&lt;STRONG&gt;Important -&lt;/STRONG&gt;&lt;/SPAN&gt; &lt;SPAN style="font-size: 11.0pt;"&gt;Multi-Queue applies only if SecureXL is enabled.&lt;/SPAN&gt;&lt;/P&gt;
&lt;TABLE style="border: 1px solid #c6c6c6; border-collapse: separate; border-radius: 5px; background-color: #e15180; padding: 6px; text-indent: 10px;" width="100%"&gt;
&lt;THEAD&gt;
&lt;TR&gt;
&lt;TH align="left"&gt;&lt;FONT size="4" color="#ffffff"&gt;Multi-Queue Requirements and Limitations&lt;/FONT&gt;&lt;/TH&gt;
&lt;/TR&gt;
&lt;/THEAD&gt;
&lt;/TABLE&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN style="font-size: 15px;"&gt;Multi-Queue is not supported on computers with one CPU core.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN style="font-size: 15px;"&gt;Network interfaces must use the driver that supports Multi-Queue. Only network cards that use the &lt;STRONG&gt;igb&lt;/STRONG&gt; (1Gb), &lt;STRONG&gt;ixgbe&lt;/STRONG&gt; (10Gb), &lt;STRONG&gt;i40e&lt;/STRONG&gt; (40Gb), or &lt;STRONG&gt;mlx5_core&lt;/STRONG&gt; (40Gb) drivers support the Multi-Queue.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN style="font-size: 15px;"&gt;You can configure a maximum of five interfaces with Multi-Queue.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN style="font-size: 15px;"&gt;You must reboot the Security Gateway after all changes in the Multi-Queue configuration.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI style="text-indent: -18.0pt;"&gt;&lt;SPAN style="font-size: 15px;"&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; For best performance, it is &lt;STRONG&gt;&lt;EM&gt;not&lt;/EM&gt;&lt;/STRONG&gt; recommended to assign both SND and a CoreXL FW instance to the same CPU core.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN style="font-size: 15px;"&gt;Do &lt;STRONG&gt;&lt;EM&gt;not&lt;/EM&gt;&lt;/STRONG&gt; change the IRQ affinity of queues manually. Changing the IRQ affinity of the queues manually can adversely affect performance.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN style="font-size: 15px;"&gt;Multi-Queue is relevant &lt;STRONG&gt;&lt;EM&gt;only&lt;/EM&gt;&lt;/STRONG&gt; if SecureXL and CoreXL is enabled.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN style="font-size: 15px;"&gt;Do &lt;STRONG&gt;&lt;EM&gt;not&lt;/EM&gt;&lt;/STRONG&gt; change the IRQ affinity of queues manually. Changing the IRQ affinity of the queues manually can adversely affect performance.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN style="font-size: 15px;"&gt;You cannot use the “sim affinity” or the &amp;nbsp;“fw ctl affinity” commands to change and query the IRQ affinity of the Multi-Queue interfaces.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN style="font-size: 15px;"&gt;The number of queues is limited by the number of CPU cores and the type of interface driver:&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;TABLE style="margin-left: 40.85pt; border: none;"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD style="border: solid windowtext 1.0pt; background: black; padding: 0cm 5.4pt 0cm 5.4pt;" width="151"&gt;
&lt;P&gt;&lt;SPAN style="color: white;"&gt;Network card driver&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD style="border: solid windowtext 1.0pt; border-left: none; background: black; padding: 0cm 5.4pt 0cm 5.4pt;" width="113"&gt;
&lt;P&gt;&lt;SPAN style="color: white;"&gt;Speed&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD style="border: solid windowtext 1.0pt; border-left: none; background: black; padding: 0cm 5.4pt 0cm 5.4pt;" width="255"&gt;
&lt;P&gt;&lt;SPAN style="color: white;"&gt;Maximal number of RX queues&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD style="border: solid windowtext 1.0pt; border-top: none; padding: 0cm 5.4pt 0cm 5.4pt;" width="151"&gt;
&lt;P&gt;igb&lt;/P&gt;
&lt;/TD&gt;
&lt;TD style="border-top: none; border-left: none; border-bottom: solid windowtext 1.0pt; border-right: solid windowtext 1.0pt; padding: 0cm 5.4pt 0cm 5.4pt;" width="113"&gt;
&lt;P&gt;1 Gb&lt;/P&gt;
&lt;/TD&gt;
&lt;TD style="border-top: none; border-left: none; border-bottom: solid windowtext 1.0pt; border-right: solid windowtext 1.0pt; padding: 0cm 5.4pt 0cm 5.4pt;" width="255"&gt;
&lt;P&gt;4&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD style="border: solid windowtext 1.0pt; border-top: none; padding: 0cm 5.4pt 0cm 5.4pt;" width="151"&gt;
&lt;P&gt;ixgbe&lt;/P&gt;
&lt;/TD&gt;
&lt;TD style="border-top: none; border-left: none; border-bottom: solid windowtext 1.0pt; border-right: solid windowtext 1.0pt; padding: 0cm 5.4pt 0cm 5.4pt;" width="113"&gt;
&lt;P&gt;10 Gb&lt;/P&gt;
&lt;/TD&gt;
&lt;TD style="border-top: none; border-left: none; border-bottom: solid windowtext 1.0pt; border-right: solid windowtext 1.0pt; padding: 0cm 5.4pt 0cm 5.4pt;" width="255"&gt;
&lt;P&gt;16&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD style="border: solid windowtext 1.0pt; border-top: none; padding: 0cm 5.4pt 0cm 5.4pt;" width="151"&gt;
&lt;P&gt;i40e&lt;/P&gt;
&lt;/TD&gt;
&lt;TD style="border-top: none; border-left: none; border-bottom: solid windowtext 1.0pt; border-right: solid windowtext 1.0pt; padding: 0cm 5.4pt 0cm 5.4pt;" width="113"&gt;
&lt;P&gt;40 Gb&lt;/P&gt;
&lt;/TD&gt;
&lt;TD style="border-top: none; border-left: none; border-bottom: solid windowtext 1.0pt; border-right: solid windowtext 1.0pt; padding: 0cm 5.4pt 0cm 5.4pt;" width="255"&gt;
&lt;P&gt;14&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD style="border: solid windowtext 1.0pt; border-top: none; padding: 0cm 5.4pt 0cm 5.4pt;" width="151"&gt;
&lt;P&gt;mlx5_core&lt;/P&gt;
&lt;/TD&gt;
&lt;TD style="border-top: none; border-left: none; border-bottom: solid windowtext 1.0pt; border-right: solid windowtext 1.0pt; padding: 0cm 5.4pt 0cm 5.4pt;" width="113"&gt;
&lt;P&gt;40 Gb&lt;/P&gt;
&lt;/TD&gt;
&lt;TD style="border-top: none; border-left: none; border-bottom: solid windowtext 1.0pt; border-right: solid windowtext 1.0pt; padding: 0cm 5.4pt 0cm 5.4pt;" width="255"&gt;
&lt;P&gt;10&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;UL&gt;
&lt;LI style="text-indent: -18.0pt;"&gt;&lt;SPAN style="font-size: 12.0pt;"&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/SPAN&gt; &lt;SPAN style="font-size: 15px;"&gt;The maximum RX queues limit dictates the largest number of SND/IRQ instances that can empty packet buffers for an individual interface using that driver that has Multi-Queue enabled.&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
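In other words, the usable queue count on an interface is bounded by both the driver limit from the table above and the number of available SND cores. A rough sketch of that relationship (my own illustration, not a Check Point tool):

```python
# Usable RX queues per interface = min(driver limit, available SND cores).
# Driver limits taken from the table above.
DRIVER_MAX_RX_QUEUES = {
    "igb": 4,         # 1 Gb
    "ixgbe": 16,      # 10 Gb
    "i40e": 14,       # 40 Gb
    "mlx5_core": 10,  # 40 Gb
}

def effective_queues(driver, snd_cores):
    """Queues that can actually be emptied in parallel on one interface."""
    return min(DRIVER_MAX_RX_QUEUES[driver], snd_cores)
```

For example, a gateway with only 4 SND cores can use at most 4 of ixgbe's 16 queues, while a gateway with 8 SND cores still gets only 4 queues on an igb interface.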
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN style="font-size: 15px;"&gt;Multi-Queue does not work on 3200 / 5000 / 15000 / 23000 appliances in the following scenario (sk114625)&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Tue, 19 Mar 2019 16:43:39 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/Multi-Queue-and-LACP-configuration/m-p/47771#M9314</guid>
      <dc:creator>HeikoAnkenbrand</dc:creator>
      <dc:date>2019-03-19T16:43:39Z</dc:date>
    </item>
    <item>
      <title>Re: Multi-Queue and LACP configuration</title>
      <link>https://community.checkpoint.com/t5/General-Topics/Multi-Queue-and-LACP-configuration/m-p/47772#M9315</link>
      <description>&lt;P&gt;More information about Multi-Queue can be found here:&lt;/P&gt;
&lt;P&gt;&lt;A href="https://community.checkpoint.com/t5/Enterprise-Appliances-and-Gaia/R80-x-Performance-Tuning-Tip-Multi-Queue/td-p/41608" target="_self"&gt;R80.x Performance Tuning Tip – Multi Queue&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 19 Mar 2019 16:43:48 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/Multi-Queue-and-LACP-configuration/m-p/47772#M9315</guid>
      <dc:creator>HeikoAnkenbrand</dc:creator>
      <dc:date>2019-03-19T16:43:48Z</dc:date>
    </item>
  </channel>
</rss>

