<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Bad performance of 23800 firewall in Firewall and Security Management</title>
    <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Bad-performance-of-23800-firewall/m-p/99564#M10391</link>
    <description>&lt;P&gt;Hi &lt;a href="https://community.checkpoint.com/t5/user/viewprofilepage/user-id/15508"&gt;@Pawan_Shukla&lt;/a&gt;,&lt;/P&gt;
&lt;P&gt;I agree with&amp;nbsp;&lt;a href="https://community.checkpoint.com/t5/user/viewprofilepage/user-id/11456"&gt;@Kaspars_Zibarts&lt;/a&gt;.&lt;/P&gt;
&lt;P&gt;Here's what I would do in this case:&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT color="#3366FF"&gt;First&lt;/FONT&gt;&lt;/STRONG&gt; enable MQ:&lt;BR /&gt;By default, each network interface has one traffic queue handled by one CPU. You cannot use more CPU cores for acceleration than the number of interfaces handling traffic. Multi-Queue lets you configure more than one traffic queue for each network interface. For each interface, more than one CPU core is used for acceleration. Multi-Queue is relevant only if SecureXL is enabled.&lt;/P&gt;
&lt;P&gt;More reading here:&lt;BR /&gt;&lt;A href="https://community.checkpoint.com/docs/DOC-3352-r80x-performance-tuning-tip-multi-queue" target="_blank" rel="noopener"&gt;- R80.x - Performance Tuning Tip - Multi Queue&lt;/A&gt;&lt;BR /&gt;&lt;A href="https://sc1.checkpoint.com/documents/R80.30/WebAdminGuides/EN/CP_R80.30_PerformanceTuning_AdminGuide/html_frameset.htm?topic=documents/R80.30/WebAdminGuides/EN/CP_R80.30_PerformanceTuning_AdminGuide/100935" target="_self" rel="noopener noreferrer"&gt;- Performance Tuning R80.30 Administration Guide – Multi-Queue&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Enable MQ on these interfaces:&lt;/P&gt;
&lt;P&gt;eth1-01: CPU 2&lt;BR /&gt;eth1-02: CPU 3&lt;BR /&gt;eth1-03: CPU 24&lt;BR /&gt;eth1-04: CPU 25&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT color="#3366FF"&gt;Second:&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Use "Fast Accel Rules" for internal networks. So you have less traffic on the PSLXL path or use IPS only for traffic to and from the internet on the external interface. &lt;BR /&gt;&lt;BR /&gt;More read here:&lt;BR /&gt;&lt;A href="https://community.checkpoint.com/t5/General-Topics/R80-x-Performance-Tuning-Tip-SecureXL-Fast-Accelerator-fw-ctl/td-p/67604" target="_blank" rel="noopener"&gt;- R80.x - Performance Tuning Tip - SecureXL Fast Accelerator (fw ctl fast_accel)&lt;/A&gt;&lt;BR /&gt;&lt;A href="https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&amp;amp;solutionid=sk156672&amp;amp;partition=Advanced&amp;amp;product=SecureXL%22" target="_self" rel="noopener noreferrer noopener noreferrer"&gt;- sk156672 - SecureXL Fast Accelerator (fw fast_accel) for R80.20 and above&lt;/A&gt;&lt;A href="https://sc1.checkpoint.com/documents/R80.30/WebAdminGuides/EN/CP_R80.30_PerformanceTuning_AdminGuide/html_frameset.htm?topic=documents/R80.30/WebAdminGuides/EN/CP_R80.30_PerformanceTuning_AdminGuide/100935" target="_self" rel="noopener noreferrer"&gt;&lt;BR /&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT color="#3366FF"&gt;Third:&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Check RX errors:&lt;BR /&gt;# netstat -in&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;RX-ERR:&lt;/STRONG&gt; Should be zero.&amp;nbsp; Caused by a cabling problem, electrical interference, or a bad port.&amp;nbsp; Examples: framing errors, short frames/runts, late collisions caused by a duplex mismatch.&lt;BR /&gt;&lt;FONT color="#008080"&gt;&lt;STRONG&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Tip:&lt;/STRONG&gt;&lt;/FONT&gt;&amp;nbsp; As a first and easy step, check for a duplex mismatch.&lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt;RX-OVR:&lt;/STRONG&gt; Should be zero.&amp;nbsp; Overrun in NIC hardware buffering.&amp;nbsp; Solved by using a higher-speed NIC, bonding multiple interfaces, or enabling Ethernet Flow Control (controversial).&lt;/P&gt;
&lt;P&gt;&lt;FONT color="#008080"&gt;&lt;STRONG&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Tip:&lt;/STRONG&gt;&lt;/FONT&gt;&amp;nbsp; Use higher-speed NICs or bond interfaces.&lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt;RX-DRP:&lt;/STRONG&gt; Should be less than 0.1% of RX-OK.&amp;nbsp; Caused by a network ring buffer overflow in the Gaia kernel due to the inability of SoftIRQ to empty the ring buffer fast enough.&amp;nbsp; Solved by allocating more SND/IRQ cores in CoreXL (always the first step), enabling Multi-Queue, or as a last resort increasing the ring buffer size.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;More tips read here:&lt;BR /&gt;&lt;A href="https://community.checkpoint.com/t5/General-Topics/R80-x-Architecture-and-Performance-Tuning-Link-Collection/m-p/47883#M9336" target="_self"&gt;- R80.x Architecture and Performance Tuning - Link Collection &lt;/A&gt;&amp;nbsp;&lt;/P&gt;</description>
    <pubDate>Tue, 20 Oct 2020 10:11:59 GMT</pubDate>
    <dc:creator>HeikoAnkenbrand</dc:creator>
    <dc:date>2020-10-20T10:11:59Z</dc:date>
    <item>
      <title>Bad performance of 23800 firewall</title>
      <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Bad-performance-of-23800-firewall/m-p/99351#M10383</link>
      <description>&lt;P&gt;Hello Experts,&lt;/P&gt;&lt;P&gt;The 23800 firewall is not providing expected performance. It is giving maximum of 7 Gbps but as per datasheet it should give 20 Gbps of throughput with Firewall+IPS blade.&amp;nbsp;&lt;/P&gt;&lt;P&gt;We can taken 4*10Gbps of link and created the bond interface&amp;nbsp; and within the bond interface multiple subinteface are created. When traffic is flowing from one subinteface to another subinteface&amp;nbsp; the max&amp;nbsp; throughput is 7Gbps.&lt;/P&gt;&lt;P&gt;Can you please help in solving the issue.&lt;/P&gt;&lt;P&gt;output of&amp;nbsp;fw ctl affinity -l&lt;/P&gt;&lt;P&gt;Mgmt: CPU 0&lt;BR /&gt;Sync: CPU 1&lt;BR /&gt;eth1-01: CPU 2&lt;BR /&gt;eth1-02: CPU 3&lt;BR /&gt;eth1-03: CPU 24&lt;BR /&gt;eth1-04: CPU 25&lt;BR /&gt;Kernel fw_0: CPU 47&lt;BR /&gt;Kernel fw_1: CPU 23&lt;BR /&gt;Kernel fw_2: CPU 46&lt;BR /&gt;Kernel fw_3: CPU 22&lt;BR /&gt;Kernel fw_4: CPU 45&lt;BR /&gt;Kernel fw_5: CPU 21&lt;BR /&gt;Kernel fw_6: CPU 44&lt;BR /&gt;Kernel fw_7: CPU 20&lt;BR /&gt;Kernel fw_8: CPU 43&lt;BR /&gt;Kernel fw_9: CPU 19&lt;BR /&gt;Kernel fw_10: CPU 42&lt;BR /&gt;Kernel fw_11: CPU 18&lt;BR /&gt;Kernel fw_12: CPU 41&lt;BR /&gt;Kernel fw_13: CPU 17&lt;BR /&gt;Kernel fw_14: CPU 40&lt;BR /&gt;Kernel fw_15: CPU 16&lt;BR /&gt;Kernel fw_16: CPU 39&lt;BR /&gt;Kernel fw_17: CPU 15&lt;BR /&gt;Kernel fw_18: CPU 38&lt;BR /&gt;Kernel fw_19: CPU 14&lt;BR /&gt;Kernel fw_20: CPU 37&lt;BR /&gt;Kernel fw_21: CPU 13&lt;BR /&gt;Kernel fw_22: CPU 36&lt;BR /&gt;Kernel fw_23: CPU 12&lt;BR /&gt;Kernel fw_24: CPU 35&lt;BR /&gt;Kernel fw_25: CPU 11&lt;BR /&gt;Kernel fw_26: CPU 34&lt;BR /&gt;Kernel fw_27: CPU 10&lt;BR /&gt;Kernel fw_28: CPU 33&lt;BR /&gt;Kernel fw_29: CPU 9&lt;BR /&gt;Kernel fw_30: CPU 32&lt;BR /&gt;Kernel fw_31: CPU 8&lt;BR /&gt;Kernel fw_32: CPU 31&lt;BR /&gt;Kernel fw_33: CPU 7&lt;BR /&gt;Kernel fw_34: CPU 30&lt;BR /&gt;Kernel fw_35: CPU 6&lt;BR /&gt;Kernel fw_36: CPU 29&lt;BR /&gt;Kernel fw_37: CPU 5&lt;BR 
/&gt;Kernel fw_38: CPU 28&lt;BR /&gt;Kernel fw_39: CPU 4&lt;BR /&gt;Daemon in.asessiond: CPU 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47&lt;BR /&gt;Daemon wsdnsd: CPU 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47&lt;BR /&gt;Daemon topod: CPU 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47&lt;BR /&gt;Daemon fwd: CPU 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47&lt;BR /&gt;Daemon in.acapd: CPU 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47&lt;BR /&gt;Daemon lpd: CPU 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47&lt;BR /&gt;Daemon mpdaemon: CPU 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47&lt;BR /&gt;Daemon cpd: CPU 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47&lt;BR /&gt;Daemon cprid: CPU 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47&lt;/P&gt;</description>
      <pubDate>Sat, 17 Oct 2020 12:41:37 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Bad-performance-of-23800-firewall/m-p/99351#M10383</guid>
      <dc:creator>Pawan_Shukla</dc:creator>
      <dc:date>2020-10-17T12:41:37Z</dc:date>
    </item>
    <item>
      <title>Re: Bad performance of 23800 firewall</title>
      <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Bad-performance-of-23800-firewall/m-p/99361#M10384</link>
      <description>&lt;P&gt;What precise version/JHF level?&lt;BR /&gt;What precisely are you doing to test performance?&lt;BR /&gt;What does the test traffic look like?&lt;/P&gt;
&lt;P&gt;If it’s a single flow (one source, one destination), you’re basically creating an elephant flow.&lt;BR /&gt;All of our performance tests involve multiple flows.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;</description>
      <pubDate>Sat, 17 Oct 2020 15:54:54 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Bad-performance-of-23800-firewall/m-p/99361#M10384</guid>
      <dc:creator>PhoneBoy</dc:creator>
      <dc:date>2020-10-17T15:54:54Z</dc:date>
    </item>
    <item>
      <title>Re: Bad performance of 23800 firewall</title>
      <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Bad-performance-of-23800-firewall/m-p/99362#M10385</link>
      <description>&lt;P&gt;Hello Experts,&lt;/P&gt;&lt;P&gt;The 23800 firewall is on the&amp;nbsp; take R80.20 take :173&lt;/P&gt;&lt;P&gt;For testing we are using iperf3 but our requirement is to copy data from one ISILION server to another&lt;/P&gt;&lt;P&gt;We have observed, with single session we are are getting around 5 Gbps for the data transfer and 2 Gbps for other traffic but as soon as we increase number of session data transfer decreases to 2.5 Gbps&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Pawan Shukla&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sat, 17 Oct 2020 16:03:41 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Bad-performance-of-23800-firewall/m-p/99362#M10385</guid>
      <dc:creator>Pawan_Shukla</dc:creator>
      <dc:date>2020-10-17T16:03:41Z</dc:date>
    </item>
    <item>
      <title>Re: Bad performance of 23800 firewall</title>
      <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Bad-performance-of-23800-firewall/m-p/99363#M10386</link>
      <description>&lt;P&gt;1) I would use R80.30 or R80.40 with the latest JHF.&lt;/P&gt;
&lt;P&gt;2) Enable Multi-Queue on the 10 Gbit/s interfaces. Without MQ, only about 3-5 Gbit/s of throughput is possible per interface. More reading here: &lt;A href="https://community.checkpoint.com/docs/DOC-3352-r80x-performance-tuning-tip-multi-queue" target="_blank" rel="noopener"&gt;R80.x - Performance Tuning Tip - Multi Queue&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;3) Which paths (more here &lt;A href="https://community.checkpoint.com/docs/DOC-3041-r80x-security-gateway-architecture-logical-packet-flow" target="_blank" rel="noopener"&gt;R80.x - Security Gateway Architecture (Logical Packet Flow)&lt;/A&gt;) do the packets take? Show us the output of the following commands:&lt;/P&gt;
&lt;P&gt;# fwaccel stats -s&lt;/P&gt;
&lt;P&gt;# top&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; -&amp;gt; press 1 for all cores&lt;/P&gt;
&lt;P&gt;4) You are using 4 SNDs (eth1-01, eth1-02, eth1-03, eth1-04) and 40 CoreXL instances.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp; -&amp;gt; If you make heavy use of the acceleration path, you should allocate more SNDs.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sat, 17 Oct 2020 17:03:03 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Bad-performance-of-23800-firewall/m-p/99363#M10386</guid>
      <dc:creator>HeikoAnkenbrand</dc:creator>
      <dc:date>2020-10-17T17:03:03Z</dc:date>
    </item>
    <item>
      <title>Re: Bad performance of 23800 firewall</title>
      <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Bad-performance-of-23800-firewall/m-p/99371#M10387</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Kindly find the below details, provide some solution. In my architecture, I have used 4*10Gb fiber interface. So I a&lt;/P&gt;&lt;P&gt;#fwaccel stats -s&lt;/P&gt;&lt;P&gt;Accelerated conns/Total conns : 7026/39238 (17%)&lt;BR /&gt;Accelerated pkts/Total pkts : 34463719650/35911020948 (95%)&lt;BR /&gt;F2Fed pkts/Total pkts : 1447301298/35911020948 (4%)&lt;BR /&gt;F2V pkts/Total pkts : 98202672/35911020948 (0%)&lt;BR /&gt;CPASXL pkts/Total pkts : 0/35911020948 (0%)&lt;BR /&gt;PSLXL pkts/Total pkts : 23921876122/35911020948 (66%)&lt;BR /&gt;CPAS inline pkts/Total pkts : 0/35911020948 (0%)&lt;BR /&gt;PSL inline pkts/Total pkts : 0/35911020948 (0%)&lt;BR /&gt;QOS inbound pkts/Total pkts : 0/35911020948 (0%)&lt;BR /&gt;QOS outbound pkts/Total pkts : 0/35911020948 (0%)&lt;BR /&gt;Corrected pkts/Total pkts : 0/35911020948 (0%)&lt;/P&gt;&lt;P&gt;#top&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;top - 22:28:34 up 3 days, 2:29, 1 user, load average: 2.17, 2.81, 2.76&lt;BR /&gt;Tasks: 615 total, 2 running, 613 sleeping, 0 stopped, 0 zombie&lt;BR /&gt;Cpu(s): 0.1%us, 0.6%sy, 0.0%ni, 93.4%id, 0.0%wa, 0.0%hi, 5.9%si, 0.0%st&lt;BR /&gt;Mem: 65746320k total, 13848148k used, 51898172k free, 329668k buffers&lt;BR /&gt;Swap: 33551672k total, 0k used, 33551672k free, 1946612k cached&lt;/P&gt;&lt;P&gt;PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND&lt;BR /&gt;5095 admin 15 0 0 0 0 S 22 0.0 60:18.99 fw_worker_24&lt;BR /&gt;5077 admin 15 0 0 0 0 S 19 0.0 85:41.38 fw_worker_6&lt;BR /&gt;5108 admin 15 0 0 0 0 S 19 0.0 220:31.43 fw_worker_37&lt;BR /&gt;5092 admin 15 0 0 0 0 S 18 0.0 279:32.86 fw_worker_21&lt;BR /&gt;5107 admin 15 0 0 0 0 S 16 0.0 364:14.04 fw_worker_36&lt;BR /&gt;8856 admin 15 0 862m 184m 39m S 14 0.3 235:20.76 fw_full&lt;BR /&gt;5078 admin 15 0 0 0 0 S 10 0.0 326:44.01 fw_worker_7&lt;BR /&gt;5072 admin 15 0 0 0 0 S 8 0.0 256:41.55 fw_worker_1&lt;BR /&gt;5110 admin 15 0 0 0 0 S 8 0.0 
105:03.55 fw_worker_39&lt;BR /&gt;5096 admin 15 0 0 0 0 S 7 0.0 284:17.59 fw_worker_25&lt;BR /&gt;5098 admin 15 0 0 0 0 S 6 0.0 261:08.10 fw_worker_27&lt;BR /&gt;5103 admin 15 0 0 0 0 S 6 0.0 198:56.70 fw_worker_32&lt;BR /&gt;5086 admin 15 0 0 0 0 S 6 0.0 370:28.17 fw_worker_15&lt;BR /&gt;5088 admin 15 0 0 0 0 S 6 0.0 401:48.28 fw_worker_17&lt;BR /&gt;5080 admin 15 0 0 0 0 S 5 0.0 140:41.30 fw_worker_9&lt;BR /&gt;5090 admin 16 0 0 0 0 S 5 0.0 245:51.60 fw_worker_19&lt;BR /&gt;5094 admin 15 0 0 0 0 S 5 0.0 199:11.30 fw_worker_23&lt;BR /&gt;5074 admin 15 0 0 0 0 S 5 0.0 260:30.67 fw_worker_3&lt;BR /&gt;5081 admin 15 0 0 0 0 S 4 0.0 59:37.76 fw_worker_10&lt;BR /&gt;5082 admin 15 0 0 0 0 S 4 0.0 138:25.71 fw_worker_11&lt;BR /&gt;5083 admin 15 0 0 0 0 S 4 0.0 148:08.63 fw_worker_12&lt;BR /&gt;5084 admin 15 0 0 0 0 S 4 0.0 149:37.41 fw_worker_13&lt;BR /&gt;5076 admin 15 0 0 0 0 S 4 0.0 241:35.83 fw_worker_5&lt;BR /&gt;5099 admin 15 0 0 0 0 S 4 0.0 51:30.35 fw_worker_28&lt;BR /&gt;5100 admin 15 0 0 0 0 S 4 0.0 133:52.08 fw_worker_29&lt;BR /&gt;5105 admin 15 0 0 0 0 S 4 0.0 205:21.42 fw_worker_34&lt;BR /&gt;5085 admin 15 0 0 0 0 S 3 0.0 151:12.51 fw_worker_14&lt;BR /&gt;5097 admin 15 0 0 0 0 S 3 0.0 52:58.49 fw_worker_26&lt;BR /&gt;5075 admin 15 0 0 0 0 S 3 0.0 63:02.47 fw_worker_4&lt;BR /&gt;5091 admin 15 0 0 0 0 S 3 0.0 135:33.92 fw_worker_20&lt;BR /&gt;5104 admin 15 0 0 0 0 S 3 0.0 84:22.19 fw_worker_33&lt;BR /&gt;5106 admin 15 0 0 0 0 S 3 0.0 225:49.28 fw_worker_35&lt;BR /&gt;5071 admin 15 0 0 0 0 S 2 0.0 147:40.42 fw_worker_0&lt;BR /&gt;5073 admin 15 0 0 0 0 S 2 0.0 146:38.35 fw_worker_2&lt;BR /&gt;5079 admin 15 0 0 0 0 S 2 0.0 142:34.09 fw_worker_8&lt;BR /&gt;5089 admin 15 0 0 0 0 S 2 0.0 53:42.59 fw_worker_18&lt;BR /&gt;5093 admin 15 0 0 0 0 S 2 0.0 215:51.19 fw_worker_22&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;# cpmq get -a&lt;/P&gt;&lt;P&gt;Active ixgbe interfaces:&lt;BR /&gt;eth1-01 [Off]&lt;BR /&gt;eth1-02 [Off]&lt;BR 
/&gt;eth1-03 [Off]&lt;BR /&gt;eth1-04 [Off]&lt;/P&gt;&lt;P&gt;Non-Active ixgbe interfaces:&lt;BR /&gt;eth3-01 [Off]&lt;BR /&gt;eth3-02 [Off]&lt;BR /&gt;eth4-01 [Off]&lt;BR /&gt;eth4-02 [Off]&lt;BR /&gt;eth4-03 [Off]&lt;BR /&gt;eth4-04 [Off]&lt;/P&gt;&lt;P&gt;Active igb interfaces:&lt;BR /&gt;Mgmt [Off]&lt;BR /&gt;Sync [Off]&lt;/P&gt;&lt;P&gt;Non-Active igb interfaces:&lt;BR /&gt;eth2-01 [Off]&lt;BR /&gt;eth2-02 [Off]&lt;BR /&gt;eth2-03 [Off]&lt;BR /&gt;eth2-04 [Off]&lt;BR /&gt;eth2-05 [Off]&lt;BR /&gt;eth2-06 [Off]&lt;BR /&gt;eth2-07 [Off]&lt;BR /&gt;eth2-08 [Off]&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Pawan Shukla&lt;/P&gt;</description>
      <pubDate>Sat, 17 Oct 2020 20:34:04 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Bad-performance-of-23800-firewall/m-p/99371#M10387</guid>
      <dc:creator>Pawan_Shukla</dc:creator>
      <dc:date>2020-10-17T20:34:04Z</dc:date>
    </item>
    <item>
      <title>Re: Bad performance of 23800 firewall</title>
      <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Bad-performance-of-23800-firewall/m-p/99372#M10388</link>
      <description>&lt;P&gt;Please provide output of Super Seven commands:&lt;/P&gt;
&lt;P&gt;&lt;A id="link_12" class="page-link lia-link-navigation lia-custom-event" href="https://community.checkpoint.com/t5/General-Topics/Super-Seven-Performance-Assessment-Commands-s7pac/m-p/40528?search-action-id=18251116015&amp;amp;search-result-uid=40528" target="_blank" rel="noopener"&gt;&lt;SPAN class="lia-search-match-lithium"&gt;Super&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class="lia-search-match-lithium"&gt;Seven&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;Performance Assessment Commands (s7pac.&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Depending on your policy, if a large percentage of the iperf traffic is accelerated, you are only using six of the available 48 CPUs.&amp;nbsp; Very likely you have RX-DRPs, as the Super Seven output will show, and that is what is slowing you down.&amp;nbsp; You almost certainly need to adjust your CoreXL split and turn on Multi-Queue.&lt;/P&gt;</description>
      <pubDate>Sat, 17 Oct 2020 22:27:20 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Bad-performance-of-23800-firewall/m-p/99372#M10388</guid>
      <dc:creator>Timothy_Hall</dc:creator>
      <dc:date>2020-10-17T22:27:20Z</dc:date>
    </item>
    <item>
      <title>Re: Bad performance of 23800 firewall</title>
      <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Bad-performance-of-23800-firewall/m-p/99484#M10389</link>
      <description>&lt;P&gt;For a single connection, 5Gbps is rather good, considering you are using 10G interfaces in the first place.&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;You can increase connection's throughput it by applying FAST_ACCEL bypass for the specific IP addresses on both sides, as described in&amp;nbsp;&lt;SPAN&gt;sk156672.&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;As&amp;nbsp;&lt;a href="https://community.checkpoint.com/t5/user/viewprofilepage/user-id/7"&gt;@PhoneBoy&lt;/a&gt;&amp;nbsp;said already, this is the case of a heavy connection, a.k.a. an "elephant flow", which cannot be addressed by balancing FW inspection across multiple cores. The only solution is to use the fast_accel feature for the IP addresses involved. Mind, you will not reach wire speed, but throughput should be better than through the Medium Path.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 19 Oct 2020 10:42:04 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Bad-performance-of-23800-firewall/m-p/99484#M10389</guid>
      <dc:creator>_Val_</dc:creator>
      <dc:date>2020-10-19T10:42:04Z</dc:date>
    </item>
    <item>
      <title>Re: Bad performance of 23800 firewall</title>
      <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Bad-performance-of-23800-firewall/m-p/99492#M10390</link>
      <description>&lt;P&gt;As&amp;nbsp;&lt;a href="https://community.checkpoint.com/t5/user/viewprofilepage/user-id/21670"&gt;@HeikoAnkenbrand&lt;/a&gt;&amp;nbsp;suggested - you must enable MQ on those interfaces:&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;eth1-01: CPU 2&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;eth1-02: CPU 3&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;eth1-03: CPU 24&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;eth1-04: CPU 25&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;Without it you won't get more than 2-3Gbps per interface.&lt;/P&gt;
&lt;P&gt;Plus upgrade - R80.20 was not such a great release. If you want the best interface performance, go with R80.40, as you will get the 3.10 kernel, which has much newer drivers for Multi-Queue.&lt;/P&gt;
&lt;P&gt;We are in the process of upgrading our 23800 for exactly that reason - to eliminate RX-DRPs on interfaces when traffic is very "bursty"; it can jump from 10 Gbps to 20 Gbps suddenly, so we see a lot of RX buffer overflows.&lt;/P&gt;</description>
      <pubDate>Mon, 19 Oct 2020 12:43:44 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Bad-performance-of-23800-firewall/m-p/99492#M10390</guid>
      <dc:creator>Kaspars_Zibarts</dc:creator>
      <dc:date>2020-10-19T12:43:44Z</dc:date>
    </item>
    <item>
      <title>Re: Bad performance of 23800 firewall</title>
      <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Bad-performance-of-23800-firewall/m-p/99564#M10391</link>
      <description>&lt;P&gt;Hi &lt;a href="https://community.checkpoint.com/t5/user/viewprofilepage/user-id/15508"&gt;@Pawan_Shukla&lt;/a&gt;,&lt;/P&gt;
&lt;P&gt;I agree with&amp;nbsp;&lt;a href="https://community.checkpoint.com/t5/user/viewprofilepage/user-id/11456"&gt;@Kaspars_Zibarts&lt;/a&gt;.&lt;/P&gt;
&lt;P&gt;Here's what I would do in this case:&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT color="#3366FF"&gt;First&lt;/FONT&gt;&lt;/STRONG&gt; enable MQ:&lt;BR /&gt;By default, each network interface has one traffic queue handled by one CPU. You cannot use more CPU cores for acceleration than the number of interfaces handling traffic. Multi-Queue lets you configure more than one traffic queue for each network interface. For each interface, more than one CPU core is used for acceleration. Multi-Queue is relevant only if SecureXL is enabled.&lt;/P&gt;
&lt;P&gt;More reading here:&lt;BR /&gt;&lt;A href="https://community.checkpoint.com/docs/DOC-3352-r80x-performance-tuning-tip-multi-queue" target="_blank" rel="noopener"&gt;- R80.x - Performance Tuning Tip - Multi Queue&lt;/A&gt;&lt;BR /&gt;&lt;A href="https://sc1.checkpoint.com/documents/R80.30/WebAdminGuides/EN/CP_R80.30_PerformanceTuning_AdminGuide/html_frameset.htm?topic=documents/R80.30/WebAdminGuides/EN/CP_R80.30_PerformanceTuning_AdminGuide/100935" target="_self" rel="noopener noreferrer"&gt;- Performance Tuning R80.30 Administration Guide – Multi-Queue&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Enable MQ on these interfaces:&lt;/P&gt;
&lt;P&gt;eth1-01: CPU 2&lt;BR /&gt;eth1-02: CPU 3&lt;BR /&gt;eth1-03: CPU 24&lt;BR /&gt;eth1-04: CPU 25&lt;/P&gt;
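On the Gaia CLI this can be sketched roughly as follows (a hedged sketch: cpmq is the Multi-Queue management tool on R80.20, the same command whose "get -a" output appears earlier in this thread, but verify the exact procedure and prompts against the Multi-Queue admin guide linked above):

```shell
# Show the current Multi-Queue state of all supported interfaces:
cpmq get -a

# Enable Multi-Queue; on R80.20 this walks interactively through the
# supported (ixgbe/igb) interfaces, so enable it for eth1-01..eth1-04
# and accept the suggested number of queues:
cpmq set

# A reboot is required before the new queues take effect.
```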
&lt;P&gt;&lt;STRONG&gt;&lt;FONT color="#3366FF"&gt;Second:&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Use "Fast Accel Rules" for internal networks. So you have less traffic on the PSLXL path or use IPS only for traffic to and from the internet on the external interface. &lt;BR /&gt;&lt;BR /&gt;More read here:&lt;BR /&gt;&lt;A href="https://community.checkpoint.com/t5/General-Topics/R80-x-Performance-Tuning-Tip-SecureXL-Fast-Accelerator-fw-ctl/td-p/67604" target="_blank" rel="noopener"&gt;- R80.x - Performance Tuning Tip - SecureXL Fast Accelerator (fw ctl fast_accel)&lt;/A&gt;&lt;BR /&gt;&lt;A href="https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&amp;amp;solutionid=sk156672&amp;amp;partition=Advanced&amp;amp;product=SecureXL%22" target="_self" rel="noopener noreferrer noopener noreferrer"&gt;- sk156672 - SecureXL Fast Accelerator (fw fast_accel) for R80.20 and above&lt;/A&gt;&lt;A href="https://sc1.checkpoint.com/documents/R80.30/WebAdminGuides/EN/CP_R80.30_PerformanceTuning_AdminGuide/html_frameset.htm?topic=documents/R80.30/WebAdminGuides/EN/CP_R80.30_PerformanceTuning_AdminGuide/100935" target="_self" rel="noopener noreferrer"&gt;&lt;BR /&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT color="#3366FF"&gt;Third:&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Check RX errors:&lt;BR /&gt;# netstat -in&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;RX-ERR:&lt;/STRONG&gt; Should be zero.&amp;nbsp; Caused by a cabling problem, electrical interference, or a bad port.&amp;nbsp; Examples: framing errors, short frames/runts, late collisions caused by a duplex mismatch.&lt;BR /&gt;&lt;FONT color="#008080"&gt;&lt;STRONG&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Tip:&lt;/STRONG&gt;&lt;/FONT&gt;&amp;nbsp; As a first and easy step, check for a duplex mismatch.&lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt;RX-OVR:&lt;/STRONG&gt; Should be zero.&amp;nbsp; Overrun in NIC hardware buffering.&amp;nbsp; Solved by using a higher-speed NIC, bonding multiple interfaces, or enabling Ethernet Flow Control (controversial).&lt;/P&gt;
&lt;P&gt;&lt;FONT color="#008080"&gt;&lt;STRONG&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Tip:&lt;/STRONG&gt;&lt;/FONT&gt;&amp;nbsp; Use higher-speed NICs or bond interfaces.&lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt;RX-DRP:&lt;/STRONG&gt; Should be less than 0.1% of RX-OK.&amp;nbsp; Caused by a network ring buffer overflow in the Gaia kernel due to the inability of SoftIRQ to empty the ring buffer fast enough.&amp;nbsp; Solved by allocating more SND/IRQ cores in CoreXL (always the first step), enabling Multi-Queue, or as a last resort increasing the ring buffer size.&lt;/P&gt;
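To turn the 0.1% rule of thumb into a quick check, a small awk filter over the netstat output can do the math. This is only a sketch: the column positions assume the usual Gaia/net-tools layout (Iface MTU Met RX-OK RX-ERR RX-DRP ...), so verify them against the header line of "netstat -in" on your version first.

```shell
# check_rxdrp: read `netstat -in` output on stdin and flag every
# interface whose RX-DRP count exceeds 0.1% of RX-OK.
# Assumed columns: $1=Iface, $4=RX-OK, $6=RX-DRP; the first two
# lines of netstat output are headers and are skipped.
check_rxdrp() {
    awk 'NR > 2 {
        if ($4 > 0) {
            pct = ($6 / $4) * 100
            if (pct > 0.1)
                printf "%s RX-DRP=%.3f%%\n", $1, pct
        }
    }'
}

# On the gateway:  netstat -in | check_rxdrp
```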
&lt;P&gt;&lt;BR /&gt;More tips read here:&lt;BR /&gt;&lt;A href="https://community.checkpoint.com/t5/General-Topics/R80-x-Architecture-and-Performance-Tuning-Link-Collection/m-p/47883#M9336" target="_self"&gt;- R80.x Architecture and Performance Tuning - Link Collection &lt;/A&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 20 Oct 2020 10:11:59 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Bad-performance-of-23800-firewall/m-p/99564#M10391</guid>
      <dc:creator>HeikoAnkenbrand</dc:creator>
      <dc:date>2020-10-20T10:11:59Z</dc:date>
    </item>
  </channel>
</rss>

