<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Occasional packet loss due to CPU Spike on R77.30 in Firewall and Security Management</title>
    <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Occasional-packet-loss-due-to-CPU-Spike-on-R77-30/m-p/96593#M7488</link>
    <description>&lt;P&gt;With only 4 queues and 8 SND/IRQ cores, you must be using the igb driver, which is limited to 4 cores for Multi-Queue, correct?&amp;nbsp; (&lt;STRONG&gt;ethtool -i [interfacename]&lt;/STRONG&gt; to check)&lt;/P&gt;
&lt;P&gt;Please provide the output of &lt;STRONG&gt;netstat -ni&lt;/STRONG&gt; and &lt;STRONG&gt;ethtool -S [interfacename]&lt;/STRONG&gt; for the 2 busy 10Gbps interfaces.&amp;nbsp; Also provide the output of &lt;STRONG&gt;sar -n EDEV&lt;/STRONG&gt; so we can see how the packet loss is occurring over time.&lt;/P&gt;
&lt;P&gt;Do you have Aggressive Aging enabled under IPS?&amp;nbsp; If so, try disabling it and see if that helps.&lt;/P&gt;
&lt;P&gt;Do not modify the interface buffer sizes yet; that only addresses a symptom of the problem, not its cause.&lt;/P&gt;</description>
    <pubDate>Fri, 11 Sep 2020 16:59:57 GMT</pubDate>
    <dc:creator>Timothy_Hall</dc:creator>
    <dc:date>2020-09-11T16:59:57Z</dc:date>
    <item>
      <title>Occasional packet loss due to CPU Spike on R77.30</title>
      <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Occasional-packet-loss-due-to-CPU-Spike-on-R77-30/m-p/96580#M7487</link>
      <description>&lt;P&gt;Platform: 23500&lt;BR /&gt;Version: R77.30 T216&lt;BR /&gt;Hyperthreading: Disabled&lt;BR /&gt;Blades: Only FW (no other blade enabled)&lt;BR /&gt;CoreXL: 8/12 split (SND/Workers)&lt;BR /&gt;Acceleration (SXL): 98%&lt;BR /&gt;Multi-Queue is enabled on two high-throughput (10G) interfaces. Total of 4 queues.&lt;BR /&gt;The RX drops on these 2 interfaces are about 0.00005%.&lt;BR /&gt;RX/TX buffers on the 2 interfaces are 512/1024.&lt;/P&gt;&lt;P&gt;Typical throughput is about 1-2Gbps, with occasional spikes to 4-5Gbps.&lt;BR /&gt;Occasional packet loss has been reported, and we notice CPU spikes of 80-90% at the time of the packet loss. The first 4 cores (used for SND/MQ) typically run around 40-50% but go to 70-80% at peak throughput.&lt;BR /&gt;At the time of packet loss, we also notice some worker cores going above 80%, but this is not always the case. Sometimes at peak throughput all workers are close to 100% idle, while the 4 MQ cores are over 80%.&lt;/P&gt;&lt;P&gt;This firewall is doing file transfers. Looking at the "Top Connections" view in CPView at the time of peak traffic, we noticed some connections using 0.8 to 1.5Gbps.&lt;/P&gt;&lt;P&gt;Not sure what needs to be changed.&lt;BR /&gt;Would an upgrade to R80.20 help?&lt;BR /&gt;What about changing the interface buffer sizes?&lt;BR /&gt;What about assigning more cores to MQ?&lt;/P&gt;</description>
      <pubDate>Fri, 11 Sep 2020 14:06:40 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Occasional-packet-loss-due-to-CPU-Spike-on-R77-30/m-p/96580#M7487</guid>
      <dc:creator>Muazzam</dc:creator>
      <dc:date>2020-09-11T14:06:40Z</dc:date>
    </item>
    <item>
      <title>Re: Occasional packet loss due to CPU Spike on R77.30</title>
      <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Occasional-packet-loss-due-to-CPU-Spike-on-R77-30/m-p/96593#M7488</link>
      <description>&lt;P&gt;With only 4 queues and 8 SND/IRQ cores, you must be using the igb driver, which is limited to 4 cores for Multi-Queue, correct?&amp;nbsp; (&lt;STRONG&gt;ethtool -i [interfacename]&lt;/STRONG&gt; to check)&lt;/P&gt;
&lt;P&gt;Please provide the output of &lt;STRONG&gt;netstat -ni&lt;/STRONG&gt; and &lt;STRONG&gt;ethtool -S [interfacename]&lt;/STRONG&gt; for the 2 busy 10Gbps interfaces.&amp;nbsp; Also provide the output of &lt;STRONG&gt;sar -n EDEV&lt;/STRONG&gt; so we can see how the packet loss is occurring over time.&lt;/P&gt;
&lt;P&gt;Do you have Aggressive Aging enabled under IPS?&amp;nbsp; If so, try disabling it and see if that helps.&lt;/P&gt;
&lt;P&gt;Do not modify the interface buffer sizes yet; that only addresses a symptom of the problem, not its cause.&lt;/P&gt;</description>
      <pubDate>Fri, 11 Sep 2020 16:59:57 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Occasional-packet-loss-due-to-CPU-Spike-on-R77-30/m-p/96593#M7488</guid>
      <dc:creator>Timothy_Hall</dc:creator>
      <dc:date>2020-09-11T16:59:57Z</dc:date>
    </item>
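The drop-rate check requested above reduces to a one-liner. A minimal sketch of computing the RX drop percentage from a `netstat -ni` data line; the interface name is taken from this thread, and the RX-DRP value of 140000 is an illustrative assumption chosen to roughly match the ~0.00005% figure reported in the original post:

```shell
# A captured `netstat -ni` data line (header omitted); on the gateway run:
#   netstat -ni | grep eth1-0x
# Columns: Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR
sample='eth1-0x 1500 0 279704603154 0 140000 0 452829681289 0 0 0'

# RX drop percentage = RX-DRP / RX-OK * 100
echo "$sample" | awk '{ printf "%s RX drop %%: %.5f\n", $1, ($6 / $4) * 100 }'
```

The absolute percentage matters less than the trend: re-running this during a CPU spike and seeing RX-DRP climb is what ties the loss to the SND/IRQ cores.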
    <item>
      <title>Re: Occasional packet loss due to CPU Spike on R77.30</title>
      <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Occasional-packet-loss-due-to-CPU-Spike-on-R77-30/m-p/96598#M7489</link>
      <description>&lt;P&gt;[Expert@FW1]# ethtool -i eth1-0x&lt;BR /&gt;driver: ixgbe&lt;BR /&gt;version: 3.9.15-NAPI&lt;BR /&gt;firmware-version: 0x800000cb&lt;BR /&gt;bus-info: 0000:87:00.0&lt;BR /&gt;** Same for other interface **&lt;/P&gt;&lt;P&gt;[Expert@FW1]# ethtool -S eth1-0x&lt;BR /&gt;NIC statistics:&lt;BR /&gt;rx_packets: 279704603154&lt;BR /&gt;tx_packets: 452829681289&lt;BR /&gt;rx_bytes: 18280692593016&lt;BR /&gt;tx_bytes: 133428743654784&lt;BR /&gt;rx_errors: 0&lt;BR /&gt;tx_errors: 0&lt;BR /&gt;rx_dropped: 0&lt;BR /&gt;tx_dropped: 0&lt;BR /&gt;multicast: 8824673&lt;BR /&gt;collisions: 0&lt;BR /&gt;rx_over_errors: 0&lt;BR /&gt;rx_crc_errors: 0&lt;BR /&gt;rx_frame_errors: 0&lt;BR /&gt;rx_fifo_errors: 0&lt;BR /&gt;rx_missed_errors: 264&lt;BR /&gt;tx_aborted_errors: 0&lt;BR /&gt;tx_carrier_errors: 0&lt;BR /&gt;tx_fifo_errors: 0&lt;BR /&gt;tx_heartbeat_errors: 0&lt;BR /&gt;rx_pkts_nic: 279704603154&lt;BR /&gt;tx_pkts_nic: 452829681297&lt;BR /&gt;rx_bytes_nic: 19399511347401&lt;BR /&gt;tx_bytes_nic: 135265222145354&lt;BR /&gt;lsc_int: 2&lt;BR /&gt;tx_busy: 0&lt;BR /&gt;non_eop_descs: 0&lt;BR /&gt;broadcast: 319&lt;BR /&gt;rx_no_buffer_count: 0&lt;BR /&gt;tx_timeout_count: 0&lt;BR /&gt;tx_restart_queue: 1&lt;BR /&gt;rx_long_length_errors: 0&lt;BR /&gt;rx_short_length_errors: 0&lt;BR /&gt;tx_flow_control_xon: 2&lt;BR /&gt;rx_flow_control_xon: 0&lt;BR /&gt;tx_flow_control_xoff: 2&lt;BR /&gt;rx_flow_control_xoff: 0&lt;BR /&gt;rx_csum_offload_errors: 0&lt;BR /&gt;alloc_rx_page_failed: 0&lt;BR /&gt;alloc_rx_buff_failed: 0&lt;BR /&gt;rx_no_dma_resources: 0&lt;BR /&gt;hw_rsc_aggregated: 0&lt;BR /&gt;hw_rsc_flushed: 0&lt;BR /&gt;fdir_match: 0&lt;BR /&gt;fdir_miss: 279705731752&lt;BR /&gt;fdir_overflow: 0&lt;BR /&gt;os2bmc_rx_by_bmc: 0&lt;BR /&gt;os2bmc_tx_by_bmc: 0&lt;BR /&gt;os2bmc_tx_by_host: 0&lt;BR /&gt;os2bmc_rx_by_host: 0&lt;BR /&gt;tx_queue_0_packets: 72384819744&lt;BR /&gt;tx_queue_0_bytes: 20277441221321&lt;BR /&gt;tx_queue_1_packets: 65835988286&lt;BR 
/&gt;tx_queue_1_bytes: 18062272500146&lt;BR /&gt;tx_queue_2_packets: 71829483754&lt;BR /&gt;tx_queue_2_bytes: 20336772417796&lt;BR /&gt;tx_queue_3_packets: 72910129484&lt;BR /&gt;tx_queue_3_bytes: 20823012955252&lt;BR /&gt;tx_queue_4_packets: 0&lt;BR /&gt;tx_queue_4_bytes: 0&lt;BR /&gt;tx_queue_5_packets: 0&lt;BR /&gt;tx_queue_5_bytes: 0&lt;BR /&gt;tx_queue_6_packets: 0&lt;BR /&gt;tx_queue_6_bytes: 0&lt;BR /&gt;tx_queue_7_packets: 0&lt;BR /&gt;tx_queue_7_bytes: 0&lt;BR /&gt;tx_queue_8_packets: 23025878448&lt;BR /&gt;tx_queue_8_bytes: 8318408460972&lt;BR /&gt;tx_queue_9_packets: 23856857091&lt;BR /&gt;tx_queue_9_bytes: 7796498083054&lt;BR /&gt;tx_queue_10_packets: 11911454580&lt;BR /&gt;tx_queue_10_bytes: 3701370972811&lt;BR /&gt;tx_queue_11_packets: 11861998596&lt;BR /&gt;tx_queue_11_bytes: 3681795471141&lt;BR /&gt;tx_queue_12_packets: 12845436216&lt;BR /&gt;tx_queue_12_bytes: 3955018588935&lt;BR /&gt;tx_queue_13_packets: 13800068566&lt;BR /&gt;tx_queue_13_bytes: 4498401641335&lt;BR /&gt;tx_queue_14_packets: 12399794438&lt;BR /&gt;tx_queue_14_bytes: 3865538912552&lt;BR /&gt;tx_queue_15_packets: 11685521160&lt;BR /&gt;tx_queue_15_bytes: 3670300606869&lt;BR /&gt;tx_queue_16_packets: 12313500819&lt;BR /&gt;tx_queue_16_bytes: 3729440172087&lt;BR /&gt;tx_queue_17_packets: 12018337900&lt;BR /&gt;tx_queue_17_bytes: 3715034617293&lt;BR /&gt;tx_queue_18_packets: 11468714242&lt;BR /&gt;tx_queue_18_bytes: 3327370364826&lt;BR /&gt;tx_queue_19_packets: 12681697974&lt;BR /&gt;tx_queue_19_bytes: 3670066669309&lt;BR /&gt;rx_queue_0_packets: 69236200991&lt;BR /&gt;rx_queue_0_bytes: 4523872267158&lt;BR /&gt;rx_queue_1_packets: 69338772648&lt;BR /&gt;rx_queue_1_bytes: 4529408746328&lt;BR /&gt;rx_queue_2_packets: 71044133302&lt;BR /&gt;rx_queue_2_bytes: 4650491437011&lt;BR /&gt;rx_queue_3_packets: 70085496214&lt;BR /&gt;rx_queue_3_bytes: 4576920142579&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;[Expert@FW1]# ethtool -S eth1-0y&lt;BR /&gt;NIC statistics:&lt;BR /&gt;rx_packets: 453012119134&lt;BR 
/&gt;tx_packets: 279650485293&lt;BR /&gt;rx_bytes: 133552514014343&lt;BR /&gt;tx_bytes: 16637792517188&lt;BR /&gt;rx_errors: 0&lt;BR /&gt;tx_errors: 0&lt;BR /&gt;rx_dropped: 0&lt;BR /&gt;tx_dropped: 0&lt;BR /&gt;multicast: 8822800&lt;BR /&gt;collisions: 0&lt;BR /&gt;rx_over_errors: 0&lt;BR /&gt;rx_crc_errors: 0&lt;BR /&gt;rx_frame_errors: 0&lt;BR /&gt;rx_fifo_errors: 0&lt;BR /&gt;rx_missed_errors: 86417&lt;BR /&gt;tx_aborted_errors: 0&lt;BR /&gt;tx_carrier_errors: 0&lt;BR /&gt;tx_fifo_errors: 0&lt;BR /&gt;tx_heartbeat_errors: 0&lt;BR /&gt;rx_pkts_nic: 453012119134&lt;BR /&gt;tx_pkts_nic: 279650485300&lt;BR /&gt;rx_bytes_nic: 135364608026416&lt;BR /&gt;tx_bytes_nic: 19394865904195&lt;BR /&gt;lsc_int: 2&lt;BR /&gt;tx_busy: 0&lt;BR /&gt;non_eop_descs: 0&lt;BR /&gt;broadcast: 318&lt;BR /&gt;rx_no_buffer_count: 0&lt;BR /&gt;tx_timeout_count: 0&lt;BR /&gt;tx_restart_queue: 0&lt;BR /&gt;rx_long_length_errors: 0&lt;BR /&gt;rx_short_length_errors: 0&lt;BR /&gt;tx_flow_control_xon: 160&lt;BR /&gt;rx_flow_control_xon: 0&lt;BR /&gt;tx_flow_control_xoff: 1016&lt;BR /&gt;rx_flow_control_xoff: 0&lt;BR /&gt;rx_csum_offload_errors: 1966&lt;BR /&gt;alloc_rx_page_failed: 0&lt;BR /&gt;alloc_rx_buff_failed: 0&lt;BR /&gt;rx_no_dma_resources: 0&lt;BR /&gt;hw_rsc_aggregated: 0&lt;BR /&gt;hw_rsc_flushed: 0&lt;BR /&gt;fdir_match: 0&lt;BR /&gt;fdir_miss: 453072257774&lt;BR /&gt;fdir_overflow: 0&lt;BR /&gt;os2bmc_rx_by_bmc: 0&lt;BR /&gt;os2bmc_tx_by_bmc: 0&lt;BR /&gt;os2bmc_tx_by_host: 0&lt;BR /&gt;os2bmc_rx_by_host: 0&lt;BR /&gt;tx_queue_0_packets: 38059433284&lt;BR /&gt;tx_queue_0_bytes: 2272177335740&lt;BR /&gt;tx_queue_1_packets: 46465181059&lt;BR /&gt;tx_queue_1_bytes: 2731746714242&lt;BR /&gt;tx_queue_2_packets: 39688419944&lt;BR /&gt;tx_queue_2_bytes: 2375284815784&lt;BR /&gt;tx_queue_3_packets: 39816008722&lt;BR /&gt;tx_queue_3_bytes: 2369247053218&lt;BR /&gt;tx_queue_4_packets: 0&lt;BR /&gt;tx_queue_4_bytes: 0&lt;BR /&gt;tx_queue_5_packets: 0&lt;BR /&gt;tx_queue_5_bytes: 0&lt;BR 
/&gt;tx_queue_6_packets: 0&lt;BR /&gt;tx_queue_6_bytes: 0&lt;BR /&gt;tx_queue_7_packets: 0&lt;BR /&gt;tx_queue_7_bytes: 0&lt;BR /&gt;tx_queue_8_packets: 9967299559&lt;BR /&gt;tx_queue_8_bytes: 585454845061&lt;BR /&gt;tx_queue_9_packets: 13888064084&lt;BR /&gt;tx_queue_9_bytes: 807454063501&lt;BR /&gt;tx_queue_10_packets: 9390683604&lt;BR /&gt;tx_queue_10_bytes: 545836699667&lt;BR /&gt;tx_queue_11_packets: 9258956063&lt;BR /&gt;tx_queue_11_bytes: 553428647074&lt;BR /&gt;tx_queue_12_packets: 8802007940&lt;BR /&gt;tx_queue_12_bytes: 523286664721&lt;BR /&gt;tx_queue_13_packets: 9155441077&lt;BR /&gt;tx_queue_13_bytes: 533689263046&lt;BR /&gt;tx_queue_14_packets: 9008473465&lt;BR /&gt;tx_queue_14_bytes: 542386786095&lt;BR /&gt;tx_queue_15_packets: 8055269948&lt;BR /&gt;tx_queue_15_bytes: 489386917462&lt;BR /&gt;tx_queue_16_packets: 9193862041&lt;BR /&gt;tx_queue_16_bytes: 560385281280&lt;BR /&gt;tx_queue_17_packets: 9396824724&lt;BR /&gt;tx_queue_17_bytes: 560785300262&lt;BR /&gt;tx_queue_18_packets: 9128944177&lt;BR /&gt;tx_queue_18_bytes: 562883460569&lt;BR /&gt;tx_queue_19_packets: 10375615609&lt;BR /&gt;tx_queue_19_bytes: 624358669844&lt;BR /&gt;rx_queue_0_packets: 115145916331&lt;BR /&gt;rx_queue_0_bytes: 34050747734185&lt;BR /&gt;rx_queue_1_packets: 109844554640&lt;BR /&gt;rx_queue_1_bytes: 32164557433634&lt;BR /&gt;rx_queue_2_packets: 112210830906&lt;BR /&gt;rx_queue_2_bytes: 32867712997960&lt;BR /&gt;rx_queue_3_packets: 115810817257&lt;BR /&gt;rx_queue_3_bytes: 34469495848564&lt;/P&gt;&lt;P&gt;[Expert@FW1]# enabled_blades&lt;BR /&gt;fw ips&lt;/P&gt;&lt;P&gt;From CPVIEW&lt;/P&gt;&lt;P&gt;Blade status Last update Number Update Time&lt;BR /&gt;Application Control disabled N/A N/A&lt;BR /&gt;Anti-Virus disabled 1109220741 09Sep2009 9:16:51&lt;BR /&gt;Anti-Bot disabled N/A N/A&lt;BR /&gt;IPS N/A N/A N/A&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The output for&amp;nbsp;sar -n EDEV is going to take too long to sanitize.&amp;nbsp;&lt;/P&gt;&lt;P&gt;IPS (although 
showing) is not enabled on the gateway.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 11 Sep 2020 20:55:54 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Occasional-packet-loss-due-to-CPU-Spike-on-R77-30/m-p/96598#M7489</guid>
      <dc:creator>Muazzam</dc:creator>
      <dc:date>2020-09-11T20:55:54Z</dc:date>
    </item>
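Only a handful of counters in those long `ethtool -S` dumps actually indicate loss, and they can be filtered out directly. A sketch using sample values copied from the eth1-0y output above:

```shell
# In practice:  ethtool -S eth1-0y | grep -E 'missed|no_buffer|xoff|csum'
# Sample counters taken from the eth1-0y dump in this thread:
stats='rx_packets: 453012119134
rx_missed_errors: 86417
rx_no_buffer_count: 0
rx_csum_offload_errors: 1966
tx_flow_control_xoff: 1016'

# rx_missed_errors incrementing means the NIC RX ring filled faster than the
# SND/IRQ cores could drain it -- consistent with drops during the CPU spikes.
echo "$stats" | grep -E 'missed|no_buffer|xoff|csum'
```

Here `rx_missed_errors` is the signature of ring exhaustion on the receive side, while a rising `tx_flow_control_xoff` shows the NIC asking its link partner to pause.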
    <item>
      <title>Re: Occasional packet loss due to CPU Spike on R77.30</title>
      <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Occasional-packet-loss-due-to-CPU-Spike-on-R77-30/m-p/96603#M7490</link>
      <description>&lt;P&gt;R77.30 is long past end of support, and you should upgrade.&lt;BR /&gt;Even if you're just doing basic firewalling, R80.40 should be an improvement for the following reasons:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Multi-Queue is enabled on all supported interfaces by default, with more queues available due to the newer Linux kernel.&lt;/LI&gt;
&lt;LI&gt;Dynamic Split is available (not enabled by default), which will automatically adjust the worker/SND split as needed.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Provided your management is running a recent R80.x JHF, you should be able to manage an R80.40 gateway.&lt;/P&gt;</description>
      <pubDate>Fri, 11 Sep 2020 22:49:04 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Occasional-packet-loss-due-to-CPU-Spike-on-R77-30/m-p/96603#M7490</guid>
      <dc:creator>PhoneBoy</dc:creator>
      <dc:date>2020-09-11T22:49:04Z</dc:date>
    </item>
    <item>
      <title>Re: Occasional packet loss due to CPU Spike on R77.30</title>
      <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Occasional-packet-loss-due-to-CPU-Spike-on-R77-30/m-p/96615#M7492</link>
      <description>&lt;P&gt;Your network traffic is getting handled pretty cleanly by your SND/IRQ cores even during high CPU load.&lt;/P&gt;
&lt;P&gt;One thing I don't understand is that the&amp;nbsp;&lt;SPAN&gt;ixgbe driver can support up to 16 queues under Gaia kernel 2.6.18, yet with your 8/12 split it only appears to be using 4 RX queues spread across CPUs 0-3, which is where you are seeing the high CPU.&amp;nbsp; Did you have a 4/16 split at one point and change it to the current 8/12?&amp;nbsp; If so, you must now run &lt;STRONG&gt;cpmq reconfigure&lt;/STRONG&gt; and reboot for Multi-Queue to take advantage of all 8 SND/IRQ cores.&amp;nbsp; Please provide the output of &lt;STRONG&gt;cpmq get -v&lt;/STRONG&gt;.&amp;nbsp; Note that the &lt;STRONG&gt;cpmq reconfigure&lt;/STRONG&gt; procedure is not required on Gaia kernel 3.10.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Or did you explicitly set MQ for only 4 queues?&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Sat, 12 Sep 2020 13:32:12 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Occasional-packet-loss-due-to-CPU-Spike-on-R77-30/m-p/96615#M7492</guid>
      <dc:creator>Timothy_Hall</dc:creator>
      <dc:date>2020-09-12T13:32:12Z</dc:date>
    </item>
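The re-check described above is, on Gaia kernel 2.6.18, roughly the following sequence. This is a sketch built only from the commands named in this thread, not a full procedure; verify the exact steps against the performance tuning documentation for your version before running it on a production gateway:

```shell
# Sketch: re-sync Multi-Queue after a CoreXL split change (Gaia 2.6.18,
# Expert mode). Commands as named in the thread above.

cpmq get -v          # show current queue count and core affinity per interface
cpmq reconfigure     # recompute the queue layout for the new 8/12 split
reboot               # required for the new queue layout to take effect

# After the reboot, confirm the 10G interfaces now use all 8 SND/IRQ cores:
cpmq get -v
```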
    <item>
      <title>Re: Occasional packet loss due to CPU Spike on R77.30</title>
      <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Occasional-packet-loss-due-to-CPU-Spike-on-R77-30/m-p/96722#M7502</link>
      <description>&lt;P&gt;&lt;a href="https://community.checkpoint.com/t5/user/viewprofilepage/user-id/597"&gt;@Timothy_Hall&lt;/a&gt;&lt;/P&gt;&lt;P&gt;Yes, a few months ago we made a change in CoreXL and enabled MQ on the 2 interfaces. We can run&amp;nbsp;cpmq reconfigure and reboot if needed.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 14 Sep 2020 12:51:05 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Occasional-packet-loss-due-to-CPU-Spike-on-R77-30/m-p/96722#M7502</guid>
      <dc:creator>Muazzam</dc:creator>
      <dc:date>2020-09-14T12:51:05Z</dc:date>
    </item>
    <item>
      <title>Re: Occasional packet loss due to CPU Spike on R77.30</title>
      <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Occasional-packet-loss-due-to-CPU-Spike-on-R77-30/m-p/96723#M7503</link>
      <description>&lt;P&gt;Yes, doing so will help a lot.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 14 Sep 2020 12:53:59 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Occasional-packet-loss-due-to-CPU-Spike-on-R77-30/m-p/96723#M7503</guid>
      <dc:creator>Timothy_Hall</dc:creator>
      <dc:date>2020-09-14T12:53:59Z</dc:date>
    </item>
  </channel>
</rss>

