<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic VSX Performance issue on r80.30 take 219 (3.10) in Firewall and Security Management</title>
    <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/VSX-Performance-issue-on-r80-30-take-219-3-10/m-p/100992#M8501</link>
    <description>&lt;P&gt;CheckMates discussion: CPU core saturation on a 15000-series VSX cluster running R80.30 JHF take 219 (3.10 kernel) after a migration from 14000 appliances on R77.30. The thread includes fwaccel, affinity, multik, netstat and top diagnostics.&lt;/P&gt;</description>
    <pubDate>Tue, 03 Nov 2020 10:19:14 GMT</pubDate>
    <dc:creator>Khalid_Aftas</dc:creator>
    <dc:date>2020-11-03T10:19:14Z</dc:date>
    <item>
      <title>VSX Performance issue on r80.30 take 219 (3.10)</title>
      <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/VSX-Performance-issue-on-r80-30-take-219-3-10/m-p/100992#M8501</link>
      <description>&lt;P&gt;Hi CheckMates,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I've been encountering an issue for four months, after a migration from old 14000 boxes running R77.30 to the 15000 series on R80.30.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;CPU core saturation could be seen immediately after the migration. After four months of investigation with TAC, the first issue identified was a bug where most TCP 443/445 traffic was taking the medium path; we used the fast accel feature to statically accelerate it. This bug should have been fixed in JHF take 219, and indeed it seems fixed, as we don't use fast accel today.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;But performance is still an issue: our FWKs are maxed out, and with them all related cores, during the business day.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Technical info:&lt;/P&gt;&lt;P&gt;VSX cluster running R80.30 take 219, new (3.10) kernel.&lt;/P&gt;&lt;P&gt;Blades: FW + IPS&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Stats:&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;BR /&gt;[Expert@EU93A0391-OSS:1]# fwaccel stats -s&lt;BR /&gt;Accelerated conns/Total conns : 27355/185090 (14%)&lt;BR /&gt;Accelerated pkts/Total pkts : 981191859249/1466990930587 (66%)&lt;BR /&gt;F2Fed pkts/Total pkts : 8887485442/1466990930587 (0%)&lt;BR /&gt;F2V pkts/Total pkts : 2802683200/1466990930587 (0%)&lt;BR /&gt;CPASXL pkts/Total pkts : 0/1466990930587 (0%)&lt;BR /&gt;PSLXL pkts/Total pkts : 476911585896/1466990930587 (32%)&lt;BR /&gt;QOS inbound pkts/Total pkts : 0/1466990930587 (0%)&lt;BR /&gt;QOS outbound pkts/Total pkts : 0/1466990930587 (0%)&lt;BR /&gt;Corrected pkts/Total pkts : 0/1466990930587 (0%)&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;[Expert@xxxxxxx:0]# fw ctl affinity -l&lt;BR /&gt;eth3-01: CPU 0&lt;BR /&gt;eth3-02: CPU 0&lt;BR /&gt;Mgmt: CPU 0&lt;BR /&gt;VS_0: CPU 4 5 6 7 8 9 10 11 12 13 14 15 20 21 22 23 24 25 26 27 28 29 30 31&lt;BR /&gt;VS_0 fwk: CPU 4 5 6 7 8 9 10 11 12 13 14 
15 20 21 22 23 24 25 26 27 28 29 30 31&lt;BR /&gt;VS_1: CPU 4 5 6 7 8 9 10 11 12 13 14 15 20 21 22 23 24 25 26 27 28 29 30 31&lt;BR /&gt;VS_1 fwk: CPU 4 5 6 7 8 9 10 11 12 13 14 15 20 21 22 23 24 25 26 27 28 29 30 31&lt;BR /&gt;VS_1 pepd: CPU 16&lt;BR /&gt;VS_1 pdpd: CPU 17&lt;BR /&gt;VS_2: CPU 4 5 6 7 8 9 10 11 12 13 14 15 20 21 22 23 24 25 26 27 28 29 30 31&lt;BR /&gt;VS_2 fwk: CPU 4 5 6 7 8 9 10 11 12 13 14 15 20 21 22 23 24 25 26 27 28 29 30 31&lt;BR /&gt;VS_3: CPU 4 5 6 7 8 9 10 11 12 13 14 15 20 21 22 23 24 25 26 27 28 29 30 31&lt;BR /&gt;VS_3 fwk: CPU 4 5 6 7 8 9 10 11 12 13 14 15 20 21 22 23 24 25 26 27 28 29 30 31&lt;BR /&gt;Interface eth1-01: has multi queue enabled&lt;BR /&gt;Interface eth1-02: has multi queue enabled&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;[Expert@EU93A0391-OSS:1]# fw ctl multik stat&lt;BR /&gt;ID | Active | CPU | Connections | Peak&lt;BR /&gt;----------------------------------------------&lt;BR /&gt;0 | Yes | 4-15+ | 11859 | 16320&lt;BR /&gt;1 | Yes | 4-15+ | 10365 | 14918&lt;BR /&gt;2 | Yes | 4-15+ | 11278 | 14150&lt;BR /&gt;3 | Yes | 4-15+ | 10394 | 14274&lt;BR /&gt;4 | Yes | 4-15+ | 9037 | 12512&lt;BR /&gt;5 | Yes | 4-15+ | 9715 | 14493&lt;BR /&gt;6 | Yes | 4-15+ | 10179 | 13757&lt;BR /&gt;7 | Yes | 4-15+ | 10707 | 15184&lt;BR /&gt;8 | Yes | 4-15+ | 9731 | 13823&lt;BR /&gt;9 | Yes | 4-15+ | 10668 | 12959&lt;BR /&gt;10 | Yes | 4-15+ | 10530 | 15033&lt;BR /&gt;11 | Yes | 4-15+ | 9930 | 14691&lt;BR /&gt;12 | Yes | 4-15+ | 1673 | 13807&lt;BR /&gt;13 | Yes | 4-15+ | 9023 | 12942&lt;BR /&gt;14 | Yes | 4-15+ | 8952 | 14260&lt;BR /&gt;15 | Yes | 4-15+ | 9620 | 13936&lt;BR /&gt;16 | Yes | 4-15+ | 10442 | 13535&lt;BR /&gt;17 | Yes | 4-15+ | 10177 | 13863&lt;BR /&gt;18 | Yes | 4-15+ | 2597 | 13612&lt;BR /&gt;19 | Yes | 4-15+ | 9941 | 13543&lt;BR /&gt;20 | Yes | 4-15+ | 7191 | 14152&lt;BR /&gt;21 | Yes | 4-15+ | 9821 | 12975&lt;BR /&gt;22 | Yes | 4-15+ | 9721 | 12999&lt;BR /&gt;23 | Yes | 4-15+ | 6742 | 
13279&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Main VS running 18 fwk instances.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;All CPU cores handling VS traffic are at 80% usage on average, and most of them max out during business hours.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We have no lead at the moment with TAC (pretty slow responses, and limited knowledge, to be honest).&lt;/P&gt;</description>
      <pubDate>Tue, 03 Nov 2020 10:19:14 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Firewall-and-Security-Management/VSX-Performance-issue-on-r80-30-take-219-3-10/m-p/100992#M8501</guid>
      <dc:creator>Khalid_Aftas</dc:creator>
      <dc:date>2020-11-03T10:19:14Z</dc:date>
    </item>
    <item>
      <title>Re: VSX Performance issue on r80.30 take 219 (3.10)</title>
      <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/VSX-Performance-issue-on-r80-30-take-219-3-10/m-p/101015#M8502</link>
      <description>&lt;P&gt;In your case the high CPU may be caused by two things:&lt;/P&gt;
&lt;P&gt;1) IPS - to identify whether this is where you need to focus your efforts, try temporarily disabling IPS in all VSes and see how that impacts CPU usage in the fwk processes.&amp;nbsp; If it reduces it substantially, you need to tune your IPS config.&lt;/P&gt;
&lt;P&gt;2) Rulebase lookup overhead - your connection templating rate is somewhat low (14%); please post the output of &lt;STRONG&gt;fwaccel stat&lt;/STRONG&gt; and try to make sure templating is enabled as far as possible in your rulebase.&lt;/P&gt;
&lt;P&gt;Also provide the output of &lt;STRONG&gt;netstat -ni&lt;/STRONG&gt; to ensure the network is running cleanly, and &lt;STRONG&gt;free -m&lt;/STRONG&gt; to ensure the box is not low on memory and swapping.&lt;/P&gt;
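The templating point above can be sanity-checked offline; a minimal sketch in Python (the line format is assumed from the fwaccel output quoted in the opening post; this is not Check Point tooling):

```python
# Recompute SecureXL path ratios from `fwaccel stats -s`-style output.
# The line format is an assumption based on the output quoted in this thread.

def parse_fwaccel_stats(text):
    """Return {label: (count, total)} for lines like
    'PSLXL pkts/Total pkts : 476911585896/1466990930587 (32%)'."""
    stats = {}
    for line in text.splitlines():
        if ':' not in line:
            continue
        label, rest = line.split(':', 1)
        frac = rest.strip().split()[0]  # e.g. '476911585896/1466990930587'
        if '/' not in frac:
            continue
        num, den = frac.split('/', 1)
        if num.isdigit() and den.isdigit():
            stats[label.strip()] = (int(num), int(den))
    return stats

sample = """Accelerated conns/Total conns : 27355/185090 (14%)
Accelerated pkts/Total pkts : 981191859249/1466990930587 (66%)
PSLXL pkts/Total pkts : 476911585896/1466990930587 (32%)"""

for label, (n, d) in parse_fwaccel_stats(sample).items():
    print(f"{label}: {100 * n / d:.1f}%")
```

The percentages shown by fwaccel are truncated; recomputing them gives a decimal of precision when comparing before and after a rulebase change.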
&lt;P&gt;Tagging&amp;nbsp;&lt;a href="https://community.checkpoint.com/t5/user/viewprofilepage/user-id/11456"&gt;@Kaspars_Zibarts&lt;/a&gt;&amp;nbsp;as well.&lt;/P&gt;</description>
      <pubDate>Tue, 03 Nov 2020 12:35:44 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Firewall-and-Security-Management/VSX-Performance-issue-on-r80-30-take-219-3-10/m-p/101015#M8502</guid>
      <dc:creator>Timothy_Hall</dc:creator>
      <dc:date>2020-11-03T12:35:44Z</dc:date>
    </item>
    <item>
      <title>Re: VSX Performance issue on r80.30 take 219 (3.10)</title>
      <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/VSX-Performance-issue-on-r80-30-take-219-3-10/m-p/101021#M8503</link>
      <description>&lt;P&gt;Thanks a lot Timothy&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;1) IPS: I disabled IPS via CLI ("ips off -n") on the main VS, waited 30 minutes to see, and there was no improvement.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;2) Rulebase: we are running the same rulebase as on R77.30. It is quite a big one (4000+ rules); there is a separate project to segment/clean it, but I would assume that R80.30 should handle it?&lt;/P&gt;&lt;P&gt;Output requested:&lt;/P&gt;&lt;P&gt;[Expert@EU93A0391-OSS:1]# fwaccel stat&lt;BR /&gt;+-----------------------------------------------------------------------------+&lt;BR /&gt;|Id|Name |Status |Interfaces |Features |&lt;BR /&gt;+-----------------------------------------------------------------------------+&lt;BR /&gt;|0 |SND |enabled |eth3-01,eth3-02,eth1-01, |&lt;BR /&gt;| | | |eth1-02 |Acceleration,Cryptography |&lt;BR /&gt;| | | | |Crypto: Tunnel,UDPEncap,MD5, |&lt;BR /&gt;| | | | |SHA1,NULL,3DES,DES,CAST, |&lt;BR /&gt;| | | | |CAST-40,AES-128,AES-256,ESP, |&lt;BR /&gt;| | | | |LinkSelection,DynamicVPN, |&lt;BR /&gt;| | | | |NatTraversal,AES-XCBC,SHA256 |&lt;BR /&gt;+-----------------------------------------------------------------------------+&lt;/P&gt;&lt;P&gt;Accept Templates : disabled by Firewall&lt;BR /&gt;Layer Policy-DC-IDMZ Security disables template offloads from rule #175&lt;BR /&gt;Throughput acceleration still enabled.&lt;BR /&gt;Drop Templates : enabled&lt;BR /&gt;NAT Templates : disabled by Firewall&lt;BR /&gt;Layer Policy-DC-IDMZ Security disables template offloads from rule #175&lt;BR /&gt;Throughput acceleration still enabled.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;[Expert@EU93A0391-OSS:1]# netstat -ni&lt;BR /&gt;Kernel Interface table&lt;BR /&gt;Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg&lt;BR /&gt;bond5.401 1500 0 439555314550 0 0 0 257437198770 0 0 0 BMRU&lt;BR /&gt;bond5.948 1500 0 18722898 0 0 0 6879533 0 0 0 BMRU&lt;BR /&gt;bond5.1500 1500 0 
0 0 0 0 28447 0 0 0 BMRU&lt;BR /&gt;bond5.1600 1500 0 19704717405 0 0 0 4756127947 0 0 0 BMRU&lt;BR /&gt;bond5.1610 1500 0 5019853613 0 0 0 18546394916 0 0 0 BMRU&lt;BR /&gt;bond5.1620 1500 0 1520265560 0 0 0 495719503 0 0 0 BMRU&lt;BR /&gt;bond5.1630 1500 0 552798593 0 0 0 1794900083 0 0 0 BMRU&lt;BR /&gt;bond5.1640 1500 0 91729713 0 0 0 92326654 0 0 0 BMRU&lt;BR /&gt;bond5.1650 1500 0 223597930 0 0 0 1329620910 0 0 0 BMRU&lt;BR /&gt;bond5.1660 1500 0 81875355 0 0 0 77538102 0 0 0 BMRU&lt;BR /&gt;bond5.1670 1500 0 141627455 0 0 0 94948303 0 0 0 BMRU&lt;BR /&gt;bond5.1680 1500 0 2824010896 0 0 0 8154361713 0 0 0 BMRU&lt;BR /&gt;bond5.1690 1500 0 849356879 0 0 0 340910789 0 0 0 BMRU&lt;BR /&gt;bond5.1691 1500 0 187572956 0 0 0 27545569 0 0 0 BMRU&lt;BR /&gt;bond5.1692 1500 0 353739673 0 0 0 395304253 0 0 0 BMRU&lt;BR /&gt;bond5.1760 1500 0 293278479 0 0 0 90423211 0 0 0 BMRU&lt;BR /&gt;bond5.1770 1500 0 62209259588 0 0 0 34204939878 0 0 0 BMRU&lt;BR /&gt;bond5.1780 1500 0 35720 0 0 0 26746 0 0 0 BMRU&lt;BR /&gt;bond5.2100 1500 0 780575632 0 0 0 441135663 0 0 0 BMRU&lt;BR /&gt;bond5.2102 1500 0 14 0 0 0 27556 0 0 0 BMRU&lt;BR /&gt;bond5.2103 1500 0 2186828356 0 1765878 0 2383220866 0 0 0 BMRU&lt;BR /&gt;bond5.2107 1500 0 429641284 0 1394 0 404485052 0 0 0 BMRU&lt;BR /&gt;bond5.2111 1500 0 76241432 0 4192 0 131623315 0 0 0 BMRU&lt;BR /&gt;bond5.2113 1500 0 260697918 0 0 0 151394245 0 0 0 BMRU&lt;BR /&gt;bond5.2115 1500 0 176 0 0 0 514948 0 0 0 BMRU&lt;BR /&gt;bond5.2116 1500 0 47740157462 0 145183 0 43532801495 0 0 0 BMRU&lt;BR /&gt;bond5.2117 1500 0 88470001929 0 30100 0 349959453544 0 0 0 BMRU&lt;BR /&gt;bond5.2122 1500 0 1400579562 0 7632 0 1538412610 0 0 0 BMRU&lt;BR /&gt;bond5.2123 1500 0 579411 0 684 0 2732185 0 0 0 BMRU&lt;BR /&gt;bond5.2124 1500 0 114179992 0 110992 0 126049379 0 0 0 BMRU&lt;BR /&gt;bond5.2127 1500 0 2345030293 0 1561 0 29915630 0 0 0 BMRU&lt;BR /&gt;bond5.2128 1500 0 1668734424 0 101522 0 1319899210 0 0 0 BMRU&lt;BR /&gt;bond5.2129 1500 0 14 
0 0 0 26746 0 0 0 BMRU&lt;BR /&gt;bond5.2131 1500 0 29 0 0 0 30292 0 0 0 BMRU&lt;BR /&gt;bond5.2132 1500 0 1149982998 0 3124 0 1147661871 0 0 0 BMRU&lt;BR /&gt;bond5.2133 1500 0 361757781 0 36219 0 365482523 0 0 0 BMRU&lt;BR /&gt;bond5.2134 1500 0 3010563555 0 103765 0 3513107603 0 0 0 BMRU&lt;BR /&gt;bond5.2135 1500 0 34461707 0 5177 0 38349343 0 0 0 BMRU&lt;BR /&gt;bond5.2136 1500 0 2761422913 0 45039 0 4404252449 0 0 0 BMRU&lt;BR /&gt;bond5.2137 1500 0 797890 0 0 0 26746 0 0 0 BMRU&lt;BR /&gt;bond5.2138 1500 0 3593125 0 1864 0 5883503 0 0 0 BMRU&lt;BR /&gt;bond5.2139 1500 0 23456805357 0 3536643 0 26989917876 0 0 0 BMRU&lt;BR /&gt;bond5.2140 1500 0 201587474 0 754 0 364655031 0 0 0 BMRU&lt;BR /&gt;bond5.2144 1500 0 154414820 0 29038 0 159334409 0 0 0 BMRU&lt;BR /&gt;bond5.2145 1500 0 37613516 0 9563 0 37515140 0 0 0 BMRU&lt;BR /&gt;bond5.2146 1500 0 249394262 0 1963 0 220754407 0 0 0 BMRU&lt;BR /&gt;bond5.2147 1500 0 6596859 0 633 0 8095319 0 0 0 BMRU&lt;BR /&gt;bond5.2148 1500 0 29853400 0 699 0 12331517 0 0 0 BMRU&lt;BR /&gt;bond5.2150 1500 0 749401 0 0 0 26797 0 0 0 BMRU&lt;BR /&gt;bond5.2151 1500 0 1222499524 0 0 0 1092391920 0 0 0 BMRU&lt;BR /&gt;bond5.2152 1500 0 584696119 0 0 0 609351194 0 0 0 BMRU&lt;BR /&gt;bond5.2153 1500 0 298373 0 0 0 26746 0 0 0 BMRU&lt;BR /&gt;bond5.2154 1500 0 26856251645 0 0 0 24230177669 0 0 0 BMRU&lt;BR /&gt;bond5.2156 1500 0 794499 0 0 0 26746 0 0 0 BMRU&lt;BR /&gt;bond5.2159 1500 0 816268 0 0 0 1408643 0 0 0 BMRU&lt;BR /&gt;bond5.2160 1500 0 192393432 0 1258 0 161404518 0 0 0 BMRU&lt;BR /&gt;bond5.2161 1500 0 1178119490 0 8 0 382957444 0 0 0 BMRU&lt;BR /&gt;bond5.2162 1500 0 675023067 0 1256 0 589829812 0 0 0 BMRU&lt;BR /&gt;bond5.2163 1500 0 2339131127 0 0 0 918364517 0 0 0 BMRU&lt;BR /&gt;bond5.2166 1500 0 299501412 0 20076 0 287746578 0 0 0 BMRU&lt;BR /&gt;bond5.2167 1500 0 5986133952 0 425775 0 13625094296 0 0 0 BMRU&lt;BR /&gt;bond5.2168 1500 0 596744 0 645 0 1423609 0 0 0 BMRU&lt;BR /&gt;bond5.2170 1500 0 1451549 0 656 
0 2563649 0 0 0 BMRU&lt;BR /&gt;bond5.2171 1500 0 5262270 0 0 0 4164222 0 0 0 BMRU&lt;BR /&gt;bond5.2172 1500 0 14 0 0 0 26746 0 0 0 BMRU&lt;BR /&gt;bond5.2173 1500 0 41572185 0 1406 0 55701109 0 0 0 BMRU&lt;BR /&gt;bond5.2174 1500 0 61123605 0 0 0 41553070 0 0 0 BMRU&lt;BR /&gt;bond5.2175 1500 0 31794091 0 3213 0 69368598 0 0 0 BMRU&lt;BR /&gt;bond5.2176 1500 0 104855617 0 5319 0 110506388 0 0 0 BMRU&lt;BR /&gt;bond5.2177 1500 0 2226770473 0 2994 0 4925383331 0 0 0 BMRU&lt;BR /&gt;bond5.2178 1500 0 20677968466 0 590155 0 37310856614 0 0 0 BMRU&lt;BR /&gt;bond5.2179 1500 0 637946479 0 772 0 3056028089 0 0 0 BMRU&lt;BR /&gt;bond5.2181 1500 0 65646804 0 0 0 5012538 0 0 0 BMRU&lt;BR /&gt;bond5.2182 1500 0 17445607 0 0 0 17839615 0 0 0 BMRU&lt;BR /&gt;bond5.2183 1500 0 14 0 0 0 26746 0 0 0 BMRU&lt;BR /&gt;bond5.2184 1500 0 14 0 0 0 26746 0 0 0 BMRU&lt;BR /&gt;bond5.2185 1500 0 14 0 0 0 26746 0 0 0 BMRU&lt;BR /&gt;bond5.2186 1500 0 14 0 0 0 26746 0 0 0 BMRU&lt;BR /&gt;bond5.2187 1500 0 19021155 0 807 0 23059082 0 0 0 BMRU&lt;BR /&gt;bond5.2188 1500 0 16740545 0 825 0 13776664 0 0 0 BMRU&lt;BR /&gt;bond5.2189 1500 0 139258849 0 1083 0 47397173 0 0 0 BMRU&lt;BR /&gt;bond5.2190 1500 0 1715228 0 617 0 1440492 0 0 0 BMRU&lt;BR /&gt;bond5.2193 1500 0 5832067655 0 0 0 2659849948 0 0 0 BMRU&lt;BR /&gt;bond5.2194 1500 0 792529249 0 6 0 123736423 0 0 0 BMRU&lt;BR /&gt;bond5.2195 1500 0 35423574 0 798 0 31496471 0 0 0 BMRU&lt;BR /&gt;bond5.2196 1500 0 25058549 0 833 0 17697807 0 0 0 BMRU&lt;BR /&gt;bond5.2198 1500 0 199847301 0 1231 0 104357281 0 0 0 BMRU&lt;BR /&gt;bond5.2199 1500 0 9470521 0 740 0 9619069 0 0 0 BMRU&lt;BR /&gt;bond5.2213 1500 0 1052693731 0 97378 0 868371511 0 0 0 BMRU&lt;BR /&gt;bond5.2214 1500 0 2156084706 0 9979 0 1907390609 0 0 0 BMRU&lt;BR /&gt;bond5.2215 1500 0 4453200104 0 10031 0 4134079969 0 0 0 BMRU&lt;BR /&gt;bond5.2216 1500 0 5084653877 0 37634 0 4072045428 0 0 0 BMRU&lt;BR /&gt;bond5.2217 1500 0 9750016673 0 5650 0 8115083087 0 0 0 BMRU&lt;BR 
/&gt;bond5.2219 1500 0 1593491 0 1338 0 3735884 0 0 0 BMRU&lt;BR /&gt;bond5.2222 1500 0 73782280360 0 0 0 56292910658 0 0 0 BMRU&lt;BR /&gt;bond5.2224 1500 0 17768206 0 754 0 17696308 0 0 0 BMRU&lt;BR /&gt;bond5.2225 1500 0 292379821 0 1936 0 375455246 0 0 0 BMRU&lt;BR /&gt;bond5.2226 1500 0 6181747 0 5064 0 13853282 0 0 0 BMRU&lt;BR /&gt;bond5.2227 1500 0 5942399460 0 0 0 2670597905 0 0 0 BMRU&lt;BR /&gt;bond5.2228 1500 0 17930046 0 6331 0 21380554 0 0 0 BMRU&lt;BR /&gt;bond5.2229 1500 0 18739721 0 855 0 32815209 0 0 0 BMRU&lt;BR /&gt;bond5.2230 1500 0 1651833 0 29058 0 3605339 0 0 0 BMRU&lt;BR /&gt;bond5.2231 1500 0 1529707 0 55977 0 3821086 0 0 0 BMRU&lt;BR /&gt;bond5.2232 1500 0 2404849 0 84181 0 5909107 0 0 0 BMRU&lt;BR /&gt;bond5.2233 1500 0 14 0 0 0 26746 0 0 0 BMRU&lt;BR /&gt;bond5.2235 1500 0 160873543 0 770 0 168940118 0 0 0 BMRU&lt;BR /&gt;bond5.2236 1500 0 1478376616 0 0 0 1413058691 0 0 0 BMRU&lt;BR /&gt;bond5.2237 1500 0 21208641021 0 435 0 1588532538 0 0 0 BMRU&lt;BR /&gt;bond5.2239 1500 0 14157374631 0 393 0 1099860052 0 0 0 BMRU&lt;BR /&gt;bond5.2240 1500 0 4679360634 0 2882 0 4553414777 0 0 0 BMRU&lt;BR /&gt;bond5.2241 1500 0 11575305 0 1394 0 5956715 0 0 0 BMRU&lt;BR /&gt;bond5.2242 1500 0 3537749319 0 208 0 3362137462 0 0 0 BMRU&lt;BR /&gt;bond5.2244 1500 0 895454 0 571 0 1655734 0 0 0 BMRU&lt;BR /&gt;bond5.2245 1500 0 1415644 0 823 0 2192337 0 0 0 BMRU&lt;BR /&gt;bond5.2246 1500 0 30474841207 0 164321 0 16301098016 0 0 0 BMRU&lt;BR /&gt;bond5.2247 1500 0 10382442501 0 53764 0 5363982512 0 0 0 BMRU&lt;BR /&gt;bond5.2248 1500 0 2097582 0 2162 0 5736814 0 0 0 BMRU&lt;BR /&gt;bond5.2249 1500 0 14 0 0 0 26746 0 0 0 BMRU&lt;BR /&gt;bond5.2252 1500 0 629377 0 740 0 1652008 0 0 0 BMRU&lt;BR /&gt;bond5.2253 1500 0 579918380 0 849 0 239511579 0 0 0 BMRU&lt;BR /&gt;bond5.2254 1500 0 266302766 0 1988 0 600819980 0 0 0 BMRU&lt;BR /&gt;bond5.2260 1500 0 882595 0 588 0 2039218 0 0 0 BMRU&lt;BR /&gt;bond5.2261 1500 0 973295 0 684 0 1994880 0 0 0 BMRU&lt;BR 
/&gt;bond5.2262 1500 0 14 0 0 0 26746 0 0 0 BMRU&lt;BR /&gt;bond5.2263 1500 0 697306449 0 7806 0 516001193 0 0 0 BMRU&lt;BR /&gt;bond5.2265 1500 0 1286938 0 8050 0 1210043 0 0 0 BMRU&lt;BR /&gt;bond5.2266 1500 0 1031939 0 28303 0 2162165 0 0 0 BMRU&lt;BR /&gt;bond5.2267 1500 0 19418775 0 0 0 742892 0 0 0 BMRU&lt;BR /&gt;bond5.2268 1500 0 308672260 0 0 0 114853234 0 0 0 BMRU&lt;BR /&gt;bond5.2269 1500 0 187147005 0 0 0 139799886 0 0 0 BMRU&lt;BR /&gt;bond5.2270 1500 0 123163467 0 2153 0 134900781 0 0 0 BMRU&lt;BR /&gt;bond5.2271 1500 0 1934791 0 1055 0 2861251 0 0 0 BMRU&lt;BR /&gt;bond5.2272 1500 0 1610975970 0 0 0 825129292 0 0 0 BMRU&lt;BR /&gt;bond5.2273 1500 0 929357930 0 0 0 1145801265 0 0 0 BMRU&lt;BR /&gt;bond5.2274 1500 0 1307668 0 698 0 1644492 0 0 0 BMRU&lt;BR /&gt;bond5.2275 1500 0 1595650 0 712 0 2091858 0 0 0 BMRU&lt;BR /&gt;bond5.2276 1500 0 867119 0 762 0 1225813 0 0 0 BMRU&lt;BR /&gt;bond5.2277 1500 0 3746523 0 1250 0 4994907 0 0 0 BMRU&lt;BR /&gt;bond5.2278 1500 0 4013456 0 1821 0 4138306 0 0 0 BMRU&lt;BR /&gt;bond5.2279 1500 0 20204308 0 3104 0 7746010 0 0 0 BMRU&lt;BR /&gt;bond5.2280 1500 0 263184702 0 96856 0 61195444 0 0 0 BMRU&lt;BR /&gt;bond5.2283 1500 0 10909944 0 3281 0 11572570 0 0 0 BMRU&lt;BR /&gt;bond5.2284 1500 0 74248379 0 4610 0 36568516 0 0 0 BMRU&lt;BR /&gt;bond5.2285 1500 0 27528449 0 4559 0 26128790 0 0 0 BMRU&lt;BR /&gt;bond5.2286 1500 0 1559113 0 1377 0 778844 0 0 0 BMRU&lt;BR /&gt;bond5.2288 1500 0 224198938 0 6186 0 429506348 0 0 0 BMRU&lt;BR /&gt;bond5.2289 1500 0 103129184 0 34 0 22523139 0 0 0 BMRU&lt;BR /&gt;bond5.2290 1500 0 5827304 0 43 0 6834018 0 0 0 BMRU&lt;BR /&gt;bond5.2291 1500 0 789452 0 665 0 1746901 0 0 0 BMRU&lt;BR /&gt;bond5.2292 1500 0 3618501 0 1339 0 4996305 0 0 0 BMRU&lt;BR /&gt;bond5.2294 1500 0 7932890519 0 0 0 6628345176 0 0 0 BMRU&lt;BR /&gt;bond5.2296 1500 0 14 0 0 0 26746 0 0 0 BMRU&lt;BR /&gt;bond5.2297 1500 0 11489393 0 643 0 191288 0 0 0 BMRU&lt;BR /&gt;bond5.2298 1500 0 0 0 0 0 0 0 0 0 
BMRU&lt;BR /&gt;bond5.2299 1500 0 0 0 0 0 0 0 0 0 BMRU&lt;BR /&gt;bond5.2300 1500 0 22123629562 0 37 0 22922508494 0 0 0 BMRU&lt;BR /&gt;bond5.2310 1500 0 195299128 0 0 0 188572828 0 0 0 BMRU&lt;BR /&gt;bond5.2311 1500 0 66983363 0 0 0 74283710 0 0 0 BMRU&lt;BR /&gt;bond5.2312 1500 0 225225167 0 0 0 140419517 0 0 0 BMRU&lt;BR /&gt;bond5.2313 1500 0 4868684 0 0 0 667863 0 0 0 BMRU&lt;BR /&gt;bond5.2314 1500 0 2380808 0 0 0 1762732 0 0 0 BMRU&lt;BR /&gt;bond5.2315 1500 0 14 0 0 0 26746 0 0 0 BMRU&lt;BR /&gt;bond5.2316 1500 0 1078240346 0 5941 0 1547178553 0 0 0 BMRU&lt;BR /&gt;bond5.2501 1500 0 69153285 0 0 0 31333069 0 0 0 BMRU&lt;BR /&gt;bond5.2511 1500 0 14 0 0 0 26746 0 0 0 BMRU&lt;BR /&gt;bond5.2512 1500 0 575959 0 830 0 1034694 0 0 0 BMRU&lt;BR /&gt;bond5.2991 1500 0 4284698 0 594 0 4307162 0 0 0 BMRU&lt;BR /&gt;lo 65536 0 6406582 0 0 0 6406582 0 0 0 ALMdORU&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;[Expert@EU93A0391-OSS:1]# free -m&lt;BR /&gt;total used free shared buff/cache available&lt;BR /&gt;Mem: 63952 12921 42358 14 8672 48981&lt;BR /&gt;Swap: 32767 0 32767&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 03 Nov 2020 13:04:24 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Firewall-and-Security-Management/VSX-Performance-issue-on-r80-30-take-219-3-10/m-p/101021#M8503</guid>
      <dc:creator>Khalid_Aftas</dc:creator>
      <dc:date>2020-11-03T13:04:24Z</dc:date>
    </item>
    <item>
      <title>Re: VSX Performance issue on r80.30 take 219 (3.10)</title>
      <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/VSX-Performance-issue-on-r80-30-take-219-3-10/m-p/101027#M8504</link>
      <description>&lt;P&gt;How's the performance of SND cores btw, they are not overloaded?&lt;/P&gt;
&lt;P&gt;Reading between the lines, your MQ is configured for cores 0, 1, 16, 17? Could you send the output of &lt;STRONG&gt;mq_mng --show&lt;/STRONG&gt;?&lt;/P&gt;
&lt;P&gt;BTW, the config below is wrong, as I suspect that cores 16 and 17 are used for MQ:&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;VS_1 pepd: CPU 16&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;VS_1 pdpd: CPU 17&lt;/SPAN&gt;&lt;/P&gt;
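That kind of overlap can be spotted mechanically; a small Python sketch (the MQ core set 0, 1, 16, 17 is this thread's working assumption, and the pepd/pdpd pinnings are taken from the affinity output quoted above):

```python
# Flag daemon core pinnings that collide with SND/multi-queue cores.
# The MQ core set {0, 1, 16, 17} is an assumption inferred in this thread;
# the pepd/pdpd pinnings come from the `fw ctl affinity -l` output above.
mq_cores = {0, 1, 16, 17}
daemon_affinity = {
    "VS_1 pepd": {16},
    "VS_1 pdpd": {17},
}

clashes = {name: sorted(cores.intersection(mq_cores))
           for name, cores in daemon_affinity.items()
           if cores.intersection(mq_cores)}
for name, cores in clashes.items():
    print(f"{name} is pinned to MQ core(s) {cores} - consider moving it")
```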
&lt;P&gt;&lt;SPAN&gt;Can you also share the output of &lt;STRONG&gt;top&lt;/STRONG&gt; - not the first run though, so we get realistic figures.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;How many FWK instances do you have configured for each VS? VS1 is obviously 24.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Can you share cpview output for the busy VS (throughput, concurrent connections, CPS and PPS figures from the network part)?&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 03 Nov 2020 13:41:35 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Firewall-and-Security-Management/VSX-Performance-issue-on-r80-30-take-219-3-10/m-p/101027#M8504</guid>
      <dc:creator>Kaspars_Zibarts</dc:creator>
      <dc:date>2020-11-03T13:41:35Z</dc:date>
    </item>
    <item>
      <title>Re: VSX Performance issue on r80.30 take 219 (3.10)</title>
      <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/VSX-Performance-issue-on-r80-30-take-219-3-10/m-p/101034#M8505</link>
      <description>&lt;P&gt;re-run your netstat command on VS0 instead&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 03 Nov 2020 14:48:15 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Firewall-and-Security-Management/VSX-Performance-issue-on-r80-30-take-219-3-10/m-p/101034#M8505</guid>
      <dc:creator>Kaspars_Zibarts</dc:creator>
      <dc:date>2020-11-03T14:48:15Z</dc:date>
    </item>
    <item>
      <title>Re: VSX Performance issue on r80.30 take 219 (3.10)</title>
      <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/VSX-Performance-issue-on-r80-30-take-219-3-10/m-p/101036#M8506</link>
      <description>&lt;P&gt;MQ does indeed use cores 16 and 17; those cores barely do anything, so we also assigned pepd to them (barely used too).&lt;/P&gt;&lt;P&gt;VS1 is the main VS (datacenter core); the other VSes are for testing/lab, with very low traffic.&lt;/P&gt;&lt;P&gt;For the top stats (I will summarize instead of pasting, as at the moment there is a workaround in place; more info below &lt;span class="lia-unicode-emoji" title=":disappointed_face:"&gt;😞&lt;/span&gt;)&lt;/P&gt;&lt;P&gt;Most CPU cores assigned to VS1 were high, and 4 of them were always maxing out.&lt;/P&gt;&lt;P&gt;FWKs for VS1 were also maxing out.&lt;/P&gt;&lt;P&gt;I re-enabled the fast accel feature to force acceleration for some traffic (mainly HTTPS/SMB), as we had as a workaround before (the pre-JHF-219 bug where HTTPS/SMB among others were not accelerated), and there is a visible improvement: 25% less usage right away.&lt;/P&gt;&lt;P&gt;I'm again suspecting the way this kind of traffic is being handled in this version.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Is it possible to dump (via Linux) the content of the traffic being handled by a specific fwk process on VSX? In the past a CP engineer was able to do so, but I can't find the commands.&lt;/P&gt;</description>
      <pubDate>Tue, 03 Nov 2020 14:56:09 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Firewall-and-Security-Management/VSX-Performance-issue-on-r80-30-take-219-3-10/m-p/101036#M8506</guid>
      <dc:creator>Khalid_Aftas</dc:creator>
      <dc:date>2020-11-03T14:56:09Z</dc:date>
    </item>
    <item>
      <title>Re: VSX Performance issue on r80.30 take 219 (3.10)</title>
      <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/VSX-Performance-issue-on-r80-30-take-219-3-10/m-p/101037#M8507</link>
      <description>&lt;P&gt;It would still help to see the top output; don't need all of it, but say the top 5-10 processes.&lt;/P&gt;</description>
      <pubDate>Tue, 03 Nov 2020 14:58:45 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Firewall-and-Security-Management/VSX-Performance-issue-on-r80-30-take-219-3-10/m-p/101037#M8507</guid>
      <dc:creator>Kaspars_Zibarts</dc:creator>
      <dc:date>2020-11-03T14:58:45Z</dc:date>
    </item>
    <item>
      <title>Re: VSX Performance issue on r80.30 take 219 (3.10)</title>
      <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/VSX-Performance-issue-on-r80-30-take-219-3-10/m-p/101038#M8508</link>
      <description>&lt;P&gt;Before the fast accel workaround, 6 of the fwk instances were maxing out at 99.99%, and the CPU cores were maxing out as well (flipping from one to another).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;top - 15:59:45 up 18 days, 16:17, 1 user, load average: 17.60, 17.78, 18.18&lt;BR /&gt;Tasks: 546 total, 2 running, 544 sleeping, 0 stopped, 0 zombie&lt;BR /&gt;%Cpu0 : 0.0 us, 1.6 sy, 0.0 ni, 82.0 id, 0.0 wa, 0.0 hi, 16.4 si, 0.0 st&lt;BR /&gt;%Cpu1 : 0.0 us, 0.0 sy, 0.0 ni, 80.0 id, 0.0 wa, 0.0 hi, 20.0 si, 0.0 st&lt;BR /&gt;%Cpu2 : 0.0 us, 0.0 sy, 0.0 ni, 84.7 id, 0.0 wa, 0.0 hi, 15.3 si, 0.0 st&lt;BR /&gt;%Cpu3 : 0.0 us, 1.9 sy, 0.0 ni, 43.4 id, 0.0 wa, 0.0 hi, 54.7 si, 0.0 st&lt;BR /&gt;%Cpu4 : 45.5 us, 13.9 sy, 0.0 ni, 40.6 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st&lt;BR /&gt;%Cpu5 : 86.3 us, 12.7 sy, 0.0 ni, 1.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st&lt;BR /&gt;%Cpu6 : 32.0 us, 6.0 sy, 0.0 ni, 62.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st&lt;BR /&gt;%Cpu7 : 50.5 us, 13.9 sy, 0.0 ni, 35.6 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st&lt;BR /&gt;%Cpu8 : 49.5 us, 10.7 sy, 0.0 ni, 39.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st&lt;BR /&gt;%Cpu9 : 37.8 us, 10.2 sy, 0.0 ni, 52.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st&lt;BR /&gt;%Cpu10 : 43.1 us, 9.8 sy, 0.0 ni, 47.1 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st&lt;BR /&gt;%Cpu11 : 40.4 us, 10.1 sy, 0.0 ni, 49.5 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st&lt;BR /&gt;%Cpu12 : 48.0 us, 10.2 sy, 0.0 ni, 41.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st&lt;BR /&gt;%Cpu13 : 44.4 us, 11.1 sy, 0.0 ni, 44.4 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st&lt;BR /&gt;%Cpu14 : 44.4 us, 11.1 sy, 0.0 ni, 44.4 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st&lt;BR /&gt;%Cpu15 : 36.4 us, 10.1 sy, 0.0 ni, 53.5 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st&lt;BR /&gt;%Cpu16 : 0.0 us, 2.9 sy, 0.0 ni, 97.1 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st&lt;BR /&gt;%Cpu17 : 2.0 us, 4.0 sy, 0.0 ni, 94.1 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st&lt;BR /&gt;%Cpu18 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st&lt;BR /&gt;%Cpu19 : 0.0 us, 0.0 sy, 0.0 
ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st&lt;BR /&gt;%Cpu20 : 45.0 us, 13.0 sy, 0.0 ni, 42.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st&lt;BR /&gt;%Cpu21 : 45.5 us, 5.0 sy, 0.0 ni, 49.5 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st&lt;BR /&gt;%Cpu22 : 92.2 us, 7.8 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st&lt;BR /&gt;%Cpu23 : 56.6 us, 9.1 sy, 0.0 ni, 34.3 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st&lt;BR /&gt;%Cpu24 : 41.6 us, 6.9 sy, 0.0 ni, 51.5 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st&lt;BR /&gt;%Cpu25 : 35.4 us, 8.1 sy, 0.0 ni, 56.6 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st&lt;BR /&gt;%Cpu26 : 44.0 us, 6.0 sy, 0.0 ni, 50.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st&lt;BR /&gt;%Cpu27 : 37.0 us, 7.0 sy, 0.0 ni, 56.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st&lt;BR /&gt;%Cpu28 : 49.5 us, 7.9 sy, 0.0 ni, 42.6 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st&lt;BR /&gt;%Cpu29 : 40.4 us, 10.1 sy, 0.0 ni, 49.5 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st&lt;BR /&gt;%Cpu30 : 37.4 us, 7.1 sy, 0.0 ni, 55.6 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st&lt;BR /&gt;%Cpu31 : 36.1 us, 7.2 sy, 0.0 ni, 56.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st&lt;BR /&gt;KiB Mem : 65487764 total, 43391232 free, 13066104 used, 9030428 buff/cache&lt;BR /&gt;KiB Swap: 33554300 total, 33554300 free, 0 used. 
50347312 avail Mem&lt;/P&gt;&lt;P&gt;PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ P COMMAND&lt;BR /&gt;80054 admin 0 -20 11.638g 7.949g 1.257g R 99.0 12.7 15164:03 5 fwk1_18&lt;BR /&gt;80048 admin 0 -20 11.638g 7.949g 1.257g R 98.1 12.7 12480:46 23 fwk1_12&lt;BR /&gt;80037 admin 0 -20 11.638g 7.949g 1.257g R 81.0 12.7 12417:08 30 fwk1_2&lt;BR /&gt;80039 admin 0 -20 11.638g 7.949g 1.257g R 76.2 12.7 12754:39 11 fwk1_4&lt;BR /&gt;80038 admin 0 -20 11.638g 7.949g 1.257g R 74.3 12.7 12898:10 7 fwk1_3&lt;BR /&gt;80050 admin 0 -20 11.638g 7.949g 1.257g R 72.4 12.7 11821:57 6 fwk1_14&lt;BR /&gt;80041 admin 0 -20 11.638g 7.949g 1.257g R 71.4 12.7 11956:38 13 fwk1_6&lt;BR /&gt;80043 admin 0 -20 11.638g 7.949g 1.257g R 71.4 12.7 12132:27 9 fwk1_8&lt;BR /&gt;80053 admin 0 -20 11.638g 7.949g 1.257g R 70.5 12.7 10948:32 14 fwk1_17&lt;BR /&gt;80058 admin 0 -20 11.638g 7.949g 1.257g S 69.5 12.7 10348:42 4 fwk1_22&lt;BR /&gt;80044 admin 0 -20 11.638g 7.949g 1.257g R 67.6 12.7 11733:59 4 fwk1_9&lt;BR /&gt;80049 admin 0 -20 11.638g 7.949g 1.257g R 67.6 12.7 11876:22 31 fwk1_13&lt;BR /&gt;81284 admin 20 0 884920 378136 42140 R 67.6 0.6 10958:41 20 fw_full&lt;BR /&gt;80036 admin 0 -20 11.638g 7.949g 1.257g R 66.7 12.7 11650:54 12 fwk1_1&lt;BR /&gt;80040 admin 0 -20 11.638g 7.949g 1.257g R 66.7 12.7 12721:16 10 fwk1_5&lt;BR /&gt;80035 admin 0 -20 11.638g 7.949g 1.257g S 65.7 12.7 10577:59 25 fwk1_0&lt;BR /&gt;80045 admin 0 -20 11.638g 7.949g 1.257g R 64.8 12.7 12111:29 21 fwk1_10&lt;BR /&gt;80046 admin 0 -20 11.638g 7.949g 1.257g R 64.8 12.7 12112:36 15 fwk1_11&lt;BR /&gt;80059 admin 0 -20 11.638g 7.949g 1.257g S 64.8 12.7 10223:50 28 fwk1_23&lt;BR /&gt;80055 admin 0 -20 11.638g 7.949g 1.257g S 63.8 12.7 11010:59 24 fwk1_19&lt;BR /&gt;80051 admin 0 -20 11.638g 7.949g 1.257g R 61.9 12.7 11766:18 15 fwk1_15&lt;BR /&gt;80052 admin 0 -20 11.638g 7.949g 1.257g R 61.9 12.7 10741:05 27 fwk1_16&lt;BR /&gt;80042 admin 0 -20 11.638g 7.949g 1.257g R 60.0 12.7 11906:00 22 fwk1_7&lt;BR /&gt;80056 
admin 0 -20 11.638g 7.949g 1.257g S 60.0 12.7 10474:53 14 fwk1_20&lt;BR /&gt;80057 admin 0 -20 11.638g 7.949g 1.257g S 56.2 12.7 11162:50 26 fwk1_21&lt;/P&gt;</description>
      <pubDate>Tue, 03 Nov 2020 15:01:21 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Firewall-and-Security-Management/VSX-Performance-issue-on-r80-30-take-219-3-10/m-p/101038#M8508</guid>
      <dc:creator>Khalid_Aftas</dc:creator>
      <dc:date>2020-11-03T15:01:21Z</dc:date>
    </item>
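The `top` snapshot in the post above is easiest to digest when reduced to just the fwk worker rows. A minimal sketch (editor's addition, not part of the thread) that parses `top -b` process lines of the layout shown in the post and ranks the fwk instances by CPU:

```python
def fwk_cpu(top_lines):
    """Extract (process_name, %CPU) for fwk worker processes from top's
    process rows and return them sorted by CPU, highest first.
    Column layout assumed (as in the post):
    PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ P COMMAND"""
    rows = []
    for line in top_lines:
        fields = line.split()
        if len(fields) >= 13 and fields[-1].startswith("fwk"):
            rows.append((fields[-1], float(fields[8])))  # fields[8] is %CPU
    return sorted(rows, key=lambda r: r[1], reverse=True)

# A few rows copied from the post:
sample = [
    "80054 admin 0 -20 11.638g 7.949g 1.257g R 99.0 12.7 15164:03 5 fwk1_18",
    "80048 admin 0 -20 11.638g 7.949g 1.257g R 98.1 12.7 12480:46 23 fwk1_12",
    "80057 admin 0 -20 11.638g 7.949g 1.257g S 56.2 12.7 11162:50 26 fwk1_21",
]
print(fwk_cpu(sample)[0])  # → ('fwk1_18', 99.0)
```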
    <item>
      <title>Re: VSX Performance issue on r80.30 take 219 (3.10)</title>
      <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/VSX-Performance-issue-on-r80-30-take-219-3-10/m-p/101097#M8509</link>
      <description>&lt;P&gt;After one day of fast_accel statically accelerating some traffic (mainly SMB/HTTPS), the CPU situation is much better: roughly a 25% decrease overall.&lt;/P&gt;&lt;P&gt;So does this mean that this heavy traffic (datacenter core, with a lot of file transfer and web) is not handled correctly by the R80.30 code?&lt;/P&gt;&lt;P&gt;The workaround is acceptable in the short term only, as we lose all security inspection with this bypass.&lt;/P&gt;</description>
      <pubDate>Wed, 04 Nov 2020 10:16:53 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Firewall-and-Security-Management/VSX-Performance-issue-on-r80-30-take-219-3-10/m-p/101097#M8509</guid>
      <dc:creator>Khalid_Aftas</dc:creator>
      <dc:date>2020-11-04T10:16:53Z</dc:date>
    </item>
    <item>
      <title>Re: VSX Performance issue on r80.30 take 219 (3.10)</title>
      <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/VSX-Performance-issue-on-r80-30-take-219-3-10/m-p/101323#M8510</link>
      <description>&lt;P&gt;&lt;a href="https://community.checkpoint.com/t5/user/viewprofilepage/user-id/597"&gt;@Timothy_Hall&lt;/a&gt;&amp;nbsp;&lt;a href="https://community.checkpoint.com/t5/user/viewprofilepage/user-id/11456"&gt;@Kaspars_Zibarts&lt;/a&gt;&amp;nbsp;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I would like to get your view on the "lead" in the policy, because TAC keeps coming back to it.&lt;/P&gt;&lt;P&gt;To the best of my knowledge, there was a major improvement in rulebase-matching performance from R77.30 to R80.30, and the new hardware appliance is twice as powerful; at worst we could expect the same level of performance, if not triple it.&lt;/P&gt;&lt;P&gt;As for the proposal to clean up / re-arrange the rulebase (3000 rules) for SXL templates: moving rules around the rulebase has absolutely no impact on which path (SXL, PXL, F2F) the traffic takes; it only affects the formation of SecureXL Accept templates. We were running the same policy on the old hardware with R77.30 for seven years without issues.&lt;/P&gt;&lt;P&gt;Reworking the policy is a project of its own that would take a year (a careful decommissioning process), given the criticality of this firewall.&lt;/P&gt;</description>
      <pubDate>Fri, 06 Nov 2020 16:32:12 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Firewall-and-Security-Management/VSX-Performance-issue-on-r80-30-take-219-3-10/m-p/101323#M8510</guid>
      <dc:creator>Khalid_Aftas</dc:creator>
      <dc:date>2020-11-06T16:32:12Z</dc:date>
    </item>
    <item>
      <title>Re: VSX Performance issue on r80.30 take 219 (3.10)</title>
      <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/VSX-Performance-issue-on-r80-30-take-219-3-10/m-p/101368#M8511</link>
      <description>&lt;P&gt;Since the fast_accel workaround, what does the distribution of traffic in the various processing paths look like now?&amp;nbsp; &lt;STRONG&gt;fwaccel stats -s&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Please provide output of &lt;STRONG&gt;netstat -ni&lt;/STRONG&gt; run from VS0.&lt;/P&gt;
&lt;P&gt;Also assuming you only have FW and IPS blades enabled, try completely disabling IPS by unchecking the box on all VS objects and see what happens to the acceleration statistics.&amp;nbsp; At that point practically all traffic should be fully accelerated other than rulebase lookups at the start of new connections in F2F.&amp;nbsp; If you are still experiencing a lot of PXL traffic a debug will be necessary to figure out why.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If practically all traffic is accelerated yet your Firewall Workers are still showing high CPU, that would seem to indicate excessive rulebase lookup overhead.&amp;nbsp; What is the connections/sec reported on the Overview screen of &lt;STRONG&gt;cpview&lt;/STRONG&gt;?&lt;/P&gt;</description>
      <pubDate>Sat, 07 Nov 2020 14:21:00 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Firewall-and-Security-Management/VSX-Performance-issue-on-r80-30-take-219-3-10/m-p/101368#M8511</guid>
      <dc:creator>Timothy_Hall</dc:creator>
      <dc:date>2020-11-07T14:21:00Z</dc:date>
    </item>
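Timothy's diagnostic steps boil down to watching how the per-path packet counts in `fwaccel stats -s` shift as blades are toggled. A small helper (editor's sketch, not from the thread) that turns that output into per-path percentages so the PSLXL share can be tracked over time:

```python
def path_shares(stats_text):
    """Parse `fwaccel stats -s` output into {path: percent of total pkts}.
    Packet lines look like: 'PSLXL pkts/Total pkts : 3158866/10400026 (30%)'."""
    shares = {}
    for line in stats_text.splitlines():
        if "pkts/Total pkts" not in line or ":" not in line:
            continue  # skip the conns line and any header noise
        label = line.split("pkts/Total pkts")[0].strip()
        num, den = line.split(":")[1].split("(")[0].strip().split("/")
        shares[label] = 100.0 * int(num) / int(den)
    return shares

# Sample lines taken from the stats posted later in this thread:
sample = """\
Accelerated pkts/Total pkts   : 7153583/10400026 (68%)
F2Fed pkts/Total pkts         : 87577/10400026 (0%)
PSLXL pkts/Total pkts         : 3158866/10400026 (30%)
"""
print(round(path_shares(sample)["PSLXL"], 1))  # → 30.4
```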
    <item>
      <title>Re: VSX Performance issue on r80.30 take 219 (3.10)</title>
      <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/VSX-Performance-issue-on-r80-30-take-219-3-10/m-p/101518#M8512</link>
      <description>&lt;P&gt;Important note: I used fast_accel with a combination of source &amp;amp; destination &amp;amp; protocol for specific known high-volume connections, not all the traffic (that would kill the 4 SNDs), and saw the decrease immediately (both in performance &amp;amp; in the accelerated vs. PSLXL percentages), but 31% of traffic is still going to PSLXL.&lt;/P&gt;&lt;P&gt;- IPS: fully disabled on the VS object &amp;amp; policy (fwaccel stats -r run to reset the counters):&lt;/P&gt;&lt;P&gt;[Expert@EU93A0391-OSS:1]# fwaccel stats -s&lt;BR /&gt;Accelerated conns/Total conns : 18446744073709551356/18446744073709551512 (250%)&lt;BR /&gt;Accelerated pkts/Total pkts : 7153583/10400026 (68%)&lt;BR /&gt;F2Fed pkts/Total pkts : 87577/10400026 (0%)&lt;BR /&gt;F2V pkts/Total pkts : 21775/10400026 (0%)&lt;BR /&gt;CPASXL pkts/Total pkts : 0/10400026 (0%)&lt;BR /&gt;&lt;STRONG&gt;PSLXL pkts/Total pkts : 3158866/10400026 (30%)&lt;/STRONG&gt;&lt;BR /&gt;QOS inbound pkts/Total pkts : 0/10400026 (0%)&lt;BR /&gt;QOS outbound pkts/Total pkts : 0/10400026 (0%)&lt;BR /&gt;Corrected pkts/Total pkts : 0/10400026 (0%)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Two leads to deal with:&lt;/P&gt;&lt;P&gt;- What is this 30% going through PSLXL? -&amp;gt; I honestly believe that some traffic (445/443) not taking the correct path is still the issue here.&lt;/P&gt;&lt;P&gt;- Policy lookup overhead? -&amp;gt; seems very unlikely&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;In the past, R&amp;amp;D was able to dump the contents of the fwk Linux process to pinpoint what was causing the high CPU; I hope TAC will finally involve them.&lt;/P&gt;</description>
      <pubDate>Mon, 09 Nov 2020 14:00:45 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Firewall-and-Security-Management/VSX-Performance-issue-on-r80-30-take-219-3-10/m-p/101518#M8512</guid>
      <dc:creator>Khalid_Aftas</dc:creator>
      <dc:date>2020-11-09T14:00:45Z</dc:date>
    </item>
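A side note on the impossible "250%" accelerated-connections figure in the post above: those 20-digit values are consistent with small negative numbers stored in an unsigned 64-bit counter, and a quick check (editor's addition, not from the thread) reproduces the bogus percentage exactly:

```python
# Values as reported by fwaccel stats -s in the post:
accel = 18446744073709551356  # Accelerated conns
total = 18446744073709551512  # Total conns

# Interpreted as signed 64-bit values, both are small negatives
# that wrapped around (value - 2**64):
print(accel - 2**64, total - 2**64)  # → -260 -104

# Their ratio is exactly the reported "250%":
print((accel - 2**64) / (total - 2**64))  # → 2.5
```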
  </channel>
</rss>

