<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Very high RX drops on R80.40 on ESXi in Cloud Firewall</title>
    <link>https://community.checkpoint.com/t5/Cloud-Firewall/Very-high-RX-drops-on-R80-40-on-ESXi/m-p/84897#M2339</link>
    <description>&lt;P&gt;On a Check Point appliance running R80.40, the RX-DRP counters from ifconfig, cpview and ethtool are in complete sync. It must be something in the virtualization layer in your case.&lt;/P&gt;</description>
    <pubDate>Tue, 12 May 2020 03:50:53 GMT</pubDate>
    <dc:creator>HristoGrigorov</dc:creator>
    <dc:date>2020-05-12T03:50:53Z</dc:date>
    <item>
      <title>Very high RX drops on R80.40 on ESXi</title>
      <link>https://community.checkpoint.com/t5/Cloud-Firewall/Very-high-RX-drops-on-R80-40-on-ESXi/m-p/84873#M2336</link>
      <description>&lt;P&gt;We’ve discovered what we believe is most likely a feature in the Linux kernel used in R80.40. In a couple of ESXi environments, R80.40 devices (SmartCenter and gateways) are showing &lt;EM&gt;very&lt;/EM&gt; high numbers of RX drops on some interfaces. By very I mean as high as 30% (yes, thirty percent). Normally RX drops would be a real cause for concern at 1-2%.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;In a home lab I’ve played with a few things:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Changing the VM NIC from the (R80.40 CloudGuard management image) default of e1000 to VMXNET3 – this seemed to make it worse&lt;/LI&gt;&lt;LI&gt;Increasing the RX ring buffer from the default 256 to 1024 – no difference at all&lt;/LI&gt;&lt;LI&gt;ethtool -S &amp;lt;interface&amp;gt; shows no errors&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;There’s a Red Hat article (&lt;A href="https://access.redhat.com/solutions/657483" target="_blank"&gt;https://access.redhat.com/solutions/657483&lt;/A&gt;) which suggests that this is due to new kernel code and could be matching other traffic such as IPv6 frames. Based on the ethtool output, these are not typical errors.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Has anyone else seen this? Can we get a clearer answer on what is going on and whether it is an issue? If this is the new “normal” it would be very useful to update sk61922 to cover this specifically for R80.40.
I am not sure whether it is also an issue with R80.30 with the 3.10 kernel – I am guessing not as it would have been reported before now.&lt;/P&gt;&lt;P&gt;The following is from a lab SmartCenter running a single gateway - so not heavily loaded:&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;ifconfig&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;eth0 Link encap:Ethernet HWaddr 00:0C:29:F5:6D:8A&lt;BR /&gt;inet addr:a.b.c.d Bcast:a.b.c.255 Mask:255.255.255.0&lt;BR /&gt;UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1&lt;BR /&gt;RX packets:1108123 errors:0 &lt;STRONG&gt;dropped:329773&lt;/STRONG&gt; overruns:0 frame:0&lt;BR /&gt;TX packets:1648956 errors:0 dropped:0 overruns:0 carrier:0&lt;BR /&gt;collisions:0 txqueuelen:1000&lt;BR /&gt;RX bytes:791902533 (755.2 MiB) TX bytes:1630948344 (1.5 GiB)&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;ethtool -S&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;NIC statistics:&lt;BR /&gt;rx_packets: 1108126&lt;BR /&gt;tx_packets: 1648971&lt;BR /&gt;rx_bytes: 796332181&lt;BR /&gt;tx_bytes: 1630950498&lt;BR /&gt;rx_broadcast: 0&lt;BR /&gt;tx_broadcast: 0&lt;BR /&gt;rx_multicast: 0&lt;BR /&gt;tx_multicast: 0&lt;BR /&gt;rx_errors: 0&lt;BR /&gt;tx_errors: 0&lt;BR /&gt;tx_dropped: 0&lt;BR /&gt;multicast: 0&lt;BR /&gt;collisions: 0&lt;BR /&gt;rx_length_errors: 0&lt;BR /&gt;rx_over_errors: 0&lt;BR /&gt;rx_crc_errors: 0&lt;BR /&gt;rx_frame_errors: 0&lt;BR /&gt;rx_no_buffer_count: 0&lt;BR /&gt;rx_missed_errors: 0&lt;BR /&gt;tx_aborted_errors: 0&lt;BR /&gt;tx_carrier_errors: 0&lt;BR /&gt;tx_fifo_errors: 0&lt;BR /&gt;tx_heartbeat_errors: 0&lt;BR /&gt;tx_window_errors: 0&lt;BR /&gt;tx_abort_late_coll: 0&lt;BR /&gt;tx_deferred_ok: 0&lt;BR /&gt;tx_single_coll_ok: 0&lt;BR /&gt;tx_multi_coll_ok: 0&lt;BR /&gt;tx_timeout_count: 0&lt;BR /&gt;tx_restart_queue: 0&lt;BR /&gt;rx_long_length_errors: 0&lt;BR /&gt;rx_short_length_errors: 0&lt;BR /&gt;rx_align_errors: 0&lt;BR /&gt;tx_tcp_seg_good: 0&lt;BR /&gt;tx_tcp_seg_failed: 0&lt;BR /&gt;rx_flow_control_xon: 0&lt;BR /&gt;rx_flow_control_xoff: 0&lt;BR 
/&gt;tx_flow_control_xon: 0&lt;BR /&gt;tx_flow_control_xoff: 0&lt;BR /&gt;rx_long_byte_count: 796332181&lt;BR /&gt;rx_csum_offload_good: 1100864&lt;BR /&gt;rx_csum_offload_errors: 0&lt;BR /&gt;alloc_rx_buff_failed: 0&lt;BR /&gt;tx_smbus: 0&lt;BR /&gt;rx_smbus: 0&lt;BR /&gt;dropped_smbus: 0&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 11 May 2020 21:35:18 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Cloud-Firewall/Very-high-RX-drops-on-R80-40-on-ESXi/m-p/84873#M2336</guid>
      <dc:creator>Paul_Hagyard</dc:creator>
      <dc:date>2020-05-11T21:35:18Z</dc:date>
    </item>
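As a side note, the drop rate implied by the ifconfig counters in the post above can be reproduced with a quick calculation. This is only a sketch using the posted numbers (1108123 RX packets, 329773 dropped), not a command from the thread:

```shell
# Compute the RX drop percentage from the ifconfig counters quoted above.
# Pure arithmetic on the posted values; no access to the gateway is needed.
rx_packets=1108123
rx_dropped=329773
awk -v p="$rx_packets" -v d="$rx_dropped" \
    'BEGIN { printf "RX drop rate: %.1f%%\n", 100 * d / p }'
# prints "RX drop rate: 29.8%"
```

At roughly 29.8%, the figure matches the "as high as 30%" claim in the post.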
    <item>
      <title>Re: Very high RX drops on R80.40 on ESXi</title>
      <link>https://community.checkpoint.com/t5/Cloud-Firewall/Very-high-RX-drops-on-R80-40-on-ESXi/m-p/84893#M2337</link>
      <description>&lt;P&gt;Your ifconfig and ethtool counters are not in agreement.&amp;nbsp; I would trust the ethtool counters over ifconfig (which is a deprecated command anyway), and ethtool is saying that RX "no buffer" and "misses" (which correlate to RX-DRP) are both zero.&amp;nbsp; What does the command &lt;STRONG&gt;ip -s link show eth0&lt;/STRONG&gt; display?&amp;nbsp; No other ethtool counters are showing that interface as struggling whatsoever.&lt;/P&gt;</description>
      <pubDate>Tue, 12 May 2020 03:06:13 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Cloud-Firewall/Very-high-RX-drops-on-R80-40-on-ESXi/m-p/84893#M2337</guid>
      <dc:creator>Timothy_Hall</dc:creator>
      <dc:date>2020-05-12T03:06:13Z</dc:date>
    </item>
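For anyone following along, `ip -s link show` prints the dropped count in the fourth column of its RX statistics line. A minimal parsing sketch; the sample text below stands in for live output and mirrors the eth0 counters posted earlier, so no real interface is assumed:

```shell
# Extract the "dropped" column from `ip -s link show`-style output.
# The here-string sample substitutes for output from a real interface.
sample='    RX: bytes  packets  errors  dropped overrun mcast
    791902533  1108123  0       329773  0       0'
echo "$sample" | awk '/RX:/ { getline; print "RX dropped:", $4 }'
# prints "RX dropped: 329773"
```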
    <item>
      <title>Re: Very high RX drops on R80.40 on ESXi</title>
      <link>https://community.checkpoint.com/t5/Cloud-Firewall/Very-high-RX-drops-on-R80-40-on-ESXi/m-p/84896#M2338</link>
      <description>&lt;P&gt;The output of "ip -s link show eth0" exactly matches the ifconfig output (dropped packets are the same, and errors are both zero).&lt;/P&gt;&lt;P&gt;I'm not seeing performance issues, but reporting a high number of dropped packets is new with R80.40. As per my original post, this could simply be the new kernel's reporting of other drops (e.g. IPv6, etc.) as shown in the Red Hat page.&lt;/P&gt;&lt;P&gt;I've changed to VMXNET3 again and (for whatever reason) it is now showing much lower drops - around 1%.&lt;/P&gt;&lt;P&gt;We've noticed this on both ESXi 7.0 and 6.0.&lt;/P&gt;</description>
      <pubDate>Tue, 12 May 2020 03:39:54 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Cloud-Firewall/Very-high-RX-drops-on-R80-40-on-ESXi/m-p/84896#M2338</guid>
      <dc:creator>Paul_Hagyard</dc:creator>
      <dc:date>2020-05-12T03:39:54Z</dc:date>
    </item>
    <item>
      <title>Re: Very high RX drops on R80.40 on ESXi</title>
      <link>https://community.checkpoint.com/t5/Cloud-Firewall/Very-high-RX-drops-on-R80-40-on-ESXi/m-p/84897#M2339</link>
      <description>&lt;P&gt;On a Check Point appliance running R80.40, the RX-DRP counters from ifconfig, cpview and ethtool are in complete sync. It must be something in the virtualization layer in your case.&lt;/P&gt;</description>
      <pubDate>Tue, 12 May 2020 03:50:53 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Cloud-Firewall/Very-high-RX-drops-on-R80-40-on-ESXi/m-p/84897#M2339</guid>
      <dc:creator>HristoGrigorov</dc:creator>
      <dc:date>2020-05-12T03:50:53Z</dc:date>
    </item>
    <item>
      <title>Re: Very high RX drops on R80.40 on ESXi</title>
      <link>https://community.checkpoint.com/t5/Cloud-Firewall/Very-high-RX-drops-on-R80-40-on-ESXi/m-p/84923#M2340</link>
      <description>&lt;P&gt;The "other drops" you are referring to are probably non-IPv4 packets being discarded by the Ethernet driver, does the incrementing of RX-DRP stop for as long as you are actively running a &lt;STRONG&gt;tcpdump&lt;/STRONG&gt;?&amp;nbsp; See the following from my book:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-SPOILER&gt;
&lt;P&gt;&lt;STRONG&gt;RX-DRP Culprit 1: Unknown or Undesired Protocol Type&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;In every Ethernet frame is a header field called “EtherType”. This field specifies the OSI&lt;BR /&gt;Layer 3 protocol that the Ethernet frame is carrying. A very common value for this&lt;BR /&gt;header field is 0x0800, which indicates that the frame is carrying an Internet Protocol&lt;BR /&gt;version 4 (IPv4) packet. Look at this excerpt from Stage 6 of “A Millisecond in the life&lt;BR /&gt;of a frame”:&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;&lt;STRONG&gt;Stage 6&lt;/STRONG&gt;: At a later time the CPU begins SoftIRQ processing and looks in the ring&lt;BR /&gt;buffer. If a descriptor is present, the CPU retrieves the frame from the associated&lt;BR /&gt;receive socket buffer, clears the descriptor referencing the frame in the ring&lt;BR /&gt;buffer, and sends the frame to all “registered receivers” which will be the&lt;BR /&gt;SecureXL acceleration driver. If a tcpdump capture is currently running,&lt;BR /&gt;libpcap will also be a “registered receiver” in that case and get a copy of the&lt;BR /&gt;frame as well. The SoftIRQ processing continues until all ring buffers are&lt;BR /&gt;completely emptied, or various packet count or time limits have been reached.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;During hardware interrupt processing, the NIC driver will examine the EtherType&lt;BR /&gt;field and verify there is a “registered receiver” present for the protocol specified in the&lt;BR /&gt;frame header. If there is not, the frame is discarded and RX-DRP is incremented.&lt;BR /&gt;Example: an Ethernet frame arrives with an EtherType of 0x86dd, indicating the&lt;BR /&gt;presence of IPv6 in the Ethernet frame. If IPv6 has not been enabled on the firewall (it is&lt;BR /&gt;off by default), the frame will be discarded by the NIC driver and RX-DRP incremented.&lt;BR /&gt;What other protocols are known to cause this effect in the real world? Let’s take a look&lt;BR /&gt;at a brief (by no means complete) sampling of other possible rogue EtherTypes you may&lt;BR /&gt;see:&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;- Appletalk (0x809b)&lt;BR /&gt;- IPX (0x8137 or 0x8138)&lt;BR /&gt;- Ethernet Flow Control (0x8808) if NIC flow control is disabled&lt;BR /&gt;- Jumbo Frames (0x8870) if the firewall is not configured to process jumbo frames&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;The dropping of these protocols for which there is no “registered receiver” does&lt;BR /&gt;cause a very small amount of overhead on the firewall during hardware interrupt&lt;BR /&gt;processing, but unless the number of frames discarded in this way exceeds 0.1% of all&lt;BR /&gt;inbound packets, you probably shouldn’t worry too much about it. An easy way to&lt;BR /&gt;confirm that the lack of a registered receiver is the cause of RX-DRPs is to perform the&lt;BR /&gt;following test:&lt;/P&gt;
&lt;P&gt;1. In an SSH or terminal window, run &lt;STRONG&gt;watch -d netstat -ni&lt;/STRONG&gt; and confirm&lt;BR /&gt;the constant incrementing of RX-DRP on (interface).&lt;BR /&gt;2. In a second SSH session, run &lt;STRONG&gt;tcpdump -ni (interface) host&amp;nbsp;&lt;/STRONG&gt;&lt;STRONG&gt;1.1.1.1&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Does the near constant incrementing of RX-DRP on that interface suddenly stop as&lt;BR /&gt;long as the tcpdump is still running, and resume when the tcpdump is stopped? If so,&lt;BR /&gt;the lack of a registered receiver is indeed the cause of the RX-DRPs. The specified filter&lt;BR /&gt;expression (host 1.1.1.1 in our example) does not actually matter, since libpcap will&lt;/P&gt;
&lt;P&gt;register to receive all protocols on behalf of the running tcpdump, and then filter the&lt;BR /&gt;packets based on the provided tcpdump expression. So as long as the tcpdump is&lt;BR /&gt;running, there is essentially a registered receiver for &lt;EM&gt;everything&lt;/EM&gt;.&lt;/P&gt;
&lt;/LI-SPOILER&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 12 May 2020 12:04:37 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Cloud-Firewall/Very-high-RX-drops-on-R80-40-on-ESXi/m-p/84923#M2340</guid>
      <dc:creator>Timothy_Hall</dc:creator>
      <dc:date>2020-05-12T12:04:37Z</dc:date>
    </item>
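The excerpt's two-terminal test and its EtherType examples can be sketched as follows. The live commands appear only as comments because they need a gateway, and the helper function is purely illustrative, built from the values listed in the excerpt above:

```shell
# Terminal 1 (on the gateway): watch -d netstat -ni        # RX-DRP climbing
# Terminal 2 (on the gateway): tcpdump -ni eth0 host 1.1.1.1
# While tcpdump runs, libpcap registers to receive every protocol, so
# RX-DRP should stop incrementing if unregistered EtherTypes are the cause.

# Illustrative helper mapping the EtherTypes named in the excerpt.
ethertype_name() {
  case "$1" in
    0x0800)        echo "IPv4" ;;
    0x86dd)        echo "IPv6" ;;
    0x809b)        echo "AppleTalk" ;;
    0x8137|0x8138) echo "IPX" ;;
    0x8808)        echo "Ethernet flow control" ;;
    *)             echo "unknown" ;;
  esac
}
ethertype_name 0x86dd   # prints "IPv6"
```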
    <item>
      <title>Re: Very high RX drops on R80.40 on ESXi</title>
      <link>https://community.checkpoint.com/t5/Cloud-Firewall/Very-high-RX-drops-on-R80-40-on-ESXi/m-p/84977#M2341</link>
      <description>&lt;P&gt;This issue is apparent on appliances too; I do not believe it is a result of ESXi configuration.&lt;/P&gt;&lt;P&gt;I have a 3200 appliance running in standalone mode that is connected to my home network but at present is not passing any traffic; it is simply connected to my network. Since rebooting it yesterday morning, "ifconfig Mgmt" reports 823818 RX packets and 150786 dropped packets, i.e. 18.3% of RX packets received were dropped. "ip -s link show Mgmt" shows identical figures.&lt;/P&gt;&lt;P&gt;The interface is connected to my switch via an access port, i.e. no VLAN tagging is active on the switch port.&amp;nbsp;&lt;/P&gt;&lt;P&gt;This morning I created a new VLAN on my switch and made the switch port a member of that VLAN, effectively isolating the appliance from the rest of the network. Having done so, the RX drops have stopped.&lt;/P&gt;&lt;P&gt;This seems to concur with the root cause described in the&amp;nbsp;Red Hat article that Paul referred to in the original post, where it is stated that&amp;nbsp;&lt;SPAN&gt;the RHEL7 kernel contains code that updates the rx_dropped counter for other non-error conditions, e.g. "Packets received with unknown or unregistered protocols". When the gateway is on the active VLAN, tcpdump shows a lot of "ethertype Unknown" packets; these are from things like access point broadcasts, Sonos speakers, etc. &lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;While it could be argued that this is academic, when attempting to isolate a performance problem, packet drops would often be used as an indicator of an issue, and if ifconfig, ip and cpview are all reporting a consistent message in terms of drops, how can this easily be discounted?&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;As Paul has suggested, as an absolute minimum, sk61922&amp;nbsp;should be updated to advise of this change in behaviour for R80.40.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 12 May 2020 19:58:32 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Cloud-Firewall/Very-high-RX-drops-on-R80-40-on-ESXi/m-p/84977#M2341</guid>
      <dc:creator>Greg_Harbers</dc:creator>
      <dc:date>2020-05-12T19:58:32Z</dc:date>
    </item>
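The 18.3% figure above checks out against the posted counters (823818 RX packets, 150786 dropped). A sketch follows; the tcpdump filter for surfacing the non-IP frames is shown only as a comment since it needs the appliance, and the exact filter expression is an assumption, not a command from the thread:

```shell
# To see the "ethertype Unknown" frames on the appliance itself, one could
# run something like:  tcpdump -eni Mgmt 'not ip and not ip6 and not arp'

# Verify the quoted drop percentage from the posted Mgmt counters.
awk -v p=823818 -v d=150786 \
    'BEGIN { printf "Mgmt RX drop rate: %.1f%%\n", 100 * d / p }'
# prints "Mgmt RX drop rate: 18.3%"
```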
    <item>
      <title>Re: Very high RX drops on R80.40 on ESXi</title>
      <link>https://community.checkpoint.com/t5/Cloud-Firewall/Very-high-RX-drops-on-R80-40-on-ESXi/m-p/84984#M2342</link>
      <description>&lt;P&gt;Yes, the drops stop immediately with the tcpdump. This suggests it is indeed other packets as per the Red Hat article - and your comments. Given that cpview on R80.40 reports the drops the same way as ifconfig (the number matches), I think it makes sense for sk61922 to be updated to clarify that this is expected behaviour.&lt;/P&gt;</description>
      <pubDate>Tue, 12 May 2020 20:45:15 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Cloud-Firewall/Very-high-RX-drops-on-R80-40-on-ESXi/m-p/84984#M2342</guid>
      <dc:creator>Paul_Hagyard</dc:creator>
      <dc:date>2020-05-12T20:45:15Z</dc:date>
    </item>
    <item>
      <title>Re: Very high RX drops on R80.40 on ESXi</title>
      <link>https://community.checkpoint.com/t5/Cloud-Firewall/Very-high-RX-drops-on-R80-40-on-ESXi/m-p/84988#M2343</link>
      <description>&lt;P&gt;RHEL5 (2.6.18) did the same thing with RX-DRPs, and it was mentioned in the first edition of my book way back in 2015 then expanded upon in the third edition with the &lt;STRONG&gt;tcpdump&lt;/STRONG&gt; trick, since I kept seeing it over and over.&amp;nbsp; Perhaps RHEL7 (3.10) has changed how it counts them or something but this has been going on for a long time.&lt;/P&gt;
&lt;P&gt;For sk61922 it should be mentioned that if there are RX-DRPs, yet no corresponding increase in the counters rx_no_buffer_count and/or rx_missed_errors, it is just irrelevant traffic getting dropped by the NIC driver and is not a performance concern. This will also affect &lt;a href="https://community.checkpoint.com/t5/user/viewprofilepage/user-id/21670"&gt;@HeikoAnkenbrand&lt;/a&gt;'s tool that counts RX-DRPs in isolation and alerts if they are above 0.1%.&amp;nbsp; Tagging the master of SKs &lt;a href="https://community.checkpoint.com/t5/user/viewprofilepage/user-id/7"&gt;@PhoneBoy&lt;/a&gt; for an update request.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 12 May 2020 21:43:44 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Cloud-Firewall/Very-high-RX-drops-on-R80-40-on-ESXi/m-p/84988#M2343</guid>
      <dc:creator>Timothy_Hall</dc:creator>
      <dc:date>2020-05-12T21:43:44Z</dc:date>
    </item>
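The triage rule proposed above for sk61922 can be expressed as a small helper. A sketch only: the counter names are the `ethtool -S` statistics discussed in this thread, and the deltas would come from two successive readings taken by the operator:

```shell
# RX-DRPs with no corresponding movement in rx_no_buffer_count or
# rx_missed_errors are just irrelevant traffic dropped by the NIC driver.
rx_drp_is_benign() {
  # $1 = RX-DRP delta, $2 = rx_no_buffer_count delta, $3 = rx_missed_errors delta
  if [ "$1" -gt 0 ] && [ "$2" -eq 0 ] && [ "$3" -eq 0 ]; then
    echo "benign"
  else
    echo "investigate"
  fi
}
rx_drp_is_benign 329773 0 0   # prints "benign"
```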
    <item>
      <title>Re: Very high RX drops on R80.40 on ESXi</title>
      <link>https://community.checkpoint.com/t5/Cloud-Firewall/Very-high-RX-drops-on-R80-40-on-ESXi/m-p/85008#M2344</link>
      <description>&lt;P&gt;I was saying that on a Check Point appliance the RX-DRP values are the same across all the tools, not that it is impossible to have RX-DRPs for the same reason as on ESXi.&lt;/P&gt;</description>
      <pubDate>Wed, 13 May 2020 03:24:00 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Cloud-Firewall/Very-high-RX-drops-on-R80-40-on-ESXi/m-p/85008#M2344</guid>
      <dc:creator>HristoGrigorov</dc:creator>
      <dc:date>2020-05-13T03:24:00Z</dc:date>
    </item>
    <item>
      <title>Re: Very high RX drops on R80.40 on ESXi</title>
      <link>https://community.checkpoint.com/t5/Cloud-Firewall/Very-high-RX-drops-on-R80-40-on-ESXi/m-p/85029#M2345</link>
      <description>&lt;P&gt;Thanks &lt;a href="https://community.checkpoint.com/t5/user/viewprofilepage/user-id/597"&gt;@Timothy_Hall&lt;/a&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Here are the links to the one-liners:&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-message-read"&gt;&lt;A href="https://community.checkpoint.com/t5/General-Topics/ONLINER-Interfaces-with-RX-ERR-RX-DRP-and-RX-OVR-Errors/td-p/81914" target="_self"&gt;- ONELINER - Interfaces with RX-ERR, RX-DRP and RX-OVR Errors&lt;/A&gt;&amp;nbsp;&lt;BR /&gt;&lt;A href="https://community.checkpoint.com/t5/General-Topics/ONELINER-All-Physical-Interface-States-in-one-Overview/td-p/82011" target="_self"&gt;- ONELINER - All Physical Interface States in one Overview&lt;/A&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 13 May 2020 08:13:36 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Cloud-Firewall/Very-high-RX-drops-on-R80-40-on-ESXi/m-p/85029#M2345</guid>
      <dc:creator>HeikoAnkenbrand</dc:creator>
      <dc:date>2020-05-13T08:13:36Z</dc:date>
    </item>
    <item>
      <title>Re: Very high RX drops on R80.40 on ESXi</title>
      <link>https://community.checkpoint.com/t5/Cloud-Firewall/Very-high-RX-drops-on-R80-40-on-ESXi/m-p/85090#M2346</link>
      <description>&lt;P&gt;Actually the person you should tag is&amp;nbsp;&lt;a href="https://community.checkpoint.com/t5/user/viewprofilepage/user-id/8050"&gt;@Ronen_Zel&lt;/a&gt;&amp;nbsp;&lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 13 May 2020 16:20:06 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Cloud-Firewall/Very-high-RX-drops-on-R80-40-on-ESXi/m-p/85090#M2346</guid>
      <dc:creator>PhoneBoy</dc:creator>
      <dc:date>2020-05-13T16:20:06Z</dc:date>
    </item>
    <item>
      <title>Re: Very high RX drops on R80.40 on ESXi</title>
      <link>https://community.checkpoint.com/t5/Cloud-Firewall/Very-high-RX-drops-on-R80-40-on-ESXi/m-p/96945#M2347</link>
      <description>&lt;P&gt;Hi there,&lt;/P&gt;&lt;P&gt;I have the same issue after migrating an open server from R80.30 to R80.40.&lt;BR /&gt;There are increasing numbers of rx-drops on only one of the interfaces, and no sign of errors with ethtool.&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="eth1-drops.png" style="width: 999px;"&gt;&lt;img src="https://community.checkpoint.com/t5/image/serverpage/image-id/8087i96E2952A2BC4DA2F/image-size/large?v=v2&amp;amp;px=999" role="button" title="eth1-drops.png" alt="eth1-drops.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;You can see the flat part where tcpdump was running for 10 minutes.&lt;/P&gt;&lt;P&gt;There are no performance impacts or other drops, but it is annoying to see errors increasing without any reason.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 17 Sep 2020 12:44:36 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Cloud-Firewall/Very-high-RX-drops-on-R80-40-on-ESXi/m-p/96945#M2347</guid>
      <dc:creator>Dilian_Chernev</dc:creator>
      <dc:date>2020-09-17T12:44:36Z</dc:date>
    </item>
    <item>
      <title>Re: Very high RX drops on R80.40 on ESXi</title>
      <link>https://community.checkpoint.com/t5/Cloud-Firewall/Very-high-RX-drops-on-R80-40-on-ESXi/m-p/96952#M2348</link>
      <description>&lt;P&gt;Yes, the updated NIC drivers in Gaia 3.10 that you are now using seem much more likely to report unknown EtherType frames as RX-DRPs.&amp;nbsp; I don't think there is any way to disable this behavior, other than trying to keep that unknown traffic from being sent or somehow filtering it out before it reaches the firewall.&lt;/P&gt;</description>
      <pubDate>Thu, 17 Sep 2020 13:53:59 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Cloud-Firewall/Very-high-RX-drops-on-R80-40-on-ESXi/m-p/96952#M2348</guid>
      <dc:creator>Timothy_Hall</dc:creator>
      <dc:date>2020-09-17T13:53:59Z</dc:date>
    </item>
  </channel>
</rss>

