<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: core affinity R80.40 - two cores in General Topics</title>
    <link>https://community.checkpoint.com/t5/General-Topics/core-affinity-R80-40-two-cores/m-p/88803#M17832</link>
    <description>&lt;P&gt;Yep, the 4600 only has 2 cores.&amp;nbsp;&lt;/P&gt;</description>
    <pubDate>Tue, 16 Jun 2020 17:00:19 GMT</pubDate>
    <dc:creator>D_TK</dc:creator>
    <dc:date>2020-06-16T17:00:19Z</dc:date>
    <item>
      <title>core affinity R80.40 - two cores</title>
      <link>https://community.checkpoint.com/t5/General-Topics/core-affinity-R80-40-two-cores/m-p/88695#M17813</link>
      <description>&lt;P&gt;Did the Blink R80.40 upgrade on a 4600 appliance cluster that was previously running R80.20. On R80.20, with CoreXL enabled, this was the distribution, and RX-DRPs were typically close to zero:&lt;/P&gt;&lt;P&gt;# sim affinity -l&lt;BR /&gt;Mgmt : 0&lt;BR /&gt;eth1 : 1&lt;BR /&gt;eth2 : 1&lt;BR /&gt;eth3 : 0&lt;BR /&gt;eth5 : 0&lt;/P&gt;&lt;P&gt;After the Blink upgrade, the appliances were in USFW mode, which seemed strange for a 2-core box. Working with TAC, I changed them back to kernel mode, and with CoreXL enabled the allocation doesn't change from:&lt;/P&gt;&lt;P&gt;sim affinity -l&lt;BR /&gt;Mgmt : 0&lt;BR /&gt;eth1 : 0&lt;BR /&gt;eth2 : 0&lt;BR /&gt;eth3 : 0&lt;BR /&gt;eth5 : 0&lt;/P&gt;&lt;P&gt;fw ctl affinity -l -r&lt;BR /&gt;CPU 0: eth5 eth1 eth2 eth3 Mgmt&lt;BR /&gt;fw_1&lt;BR /&gt;CPU 1: fw_0&lt;BR /&gt;All: in.asessiond mpdaemon in.acapd usrchkd pepd in.geod rad rtmd fwd lpd vpnd pdpd cpd cprid&lt;/P&gt;&lt;P&gt;And RX-DRPs are accumulating.&lt;/P&gt;&lt;P&gt;I've never had to change CoreXL from "auto" mode - should I even attempt to balance the interfaces? I haven't heard about any user experience issues, but traffic is pretty light right now with most folks still WFH.&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;</description>
      <pubDate>Mon, 15 Jun 2020 20:05:30 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/core-affinity-R80-40-two-cores/m-p/88695#M17813</guid>
      <dc:creator>D_TK</dc:creator>
      <dc:date>2020-06-15T20:05:30Z</dc:date>
    </item>
    <item>
      <title>Re: core affinity R80.40 - two cores</title>
      <link>https://community.checkpoint.com/t5/General-Topics/core-affinity-R80-40-two-cores/m-p/88728#M17818</link>
      <description>&lt;P&gt;Are you sure you only have two cores on your appliance? I think it has to be 4.&lt;/P&gt;
&lt;P&gt;With auto core assignment and 4 cores, only one of the cores serves as SND.&lt;/P&gt;</description>
      <pubDate>Tue, 16 Jun 2020 06:50:26 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/core-affinity-R80-40-two-cores/m-p/88728#M17818</guid>
      <dc:creator>_Val_</dc:creator>
      <dc:date>2020-06-16T06:50:26Z</dc:date>
    </item>
    <item>
      <title>Re: core affinity R80.40 - two cores</title>
      <link>https://community.checkpoint.com/t5/General-Topics/core-affinity-R80-40-two-cores/m-p/88803#M17832</link>
      <description>&lt;P&gt;Yep, the 4600 only has 2 cores.&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 16 Jun 2020 17:00:19 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/core-affinity-R80-40-two-cores/m-p/88803#M17832</guid>
      <dc:creator>D_TK</dc:creator>
      <dc:date>2020-06-16T17:00:19Z</dc:date>
    </item>
    <item>
      <title>Re: core affinity R80.40 - two cores</title>
      <link>https://community.checkpoint.com/t5/General-Topics/core-affinity-R80-40-two-cores/m-p/88830#M17837</link>
      <description>&lt;P&gt;There is no automatic interface affinity in R80.40, at least not the way it was implemented in earlier releases.&amp;nbsp; On Gaia 3.10, Multi-Queue is enabled for all interfaces (except management), which spreads the SoftIRQ/SecureXL load across all your SND cores (both of them, in your 2/2 split).&amp;nbsp; It is possible that the interfaces on the 4600 (a pretty old box) are not capable of Multi-Queue, and that is why everything is on Core 0 instead.&lt;/P&gt;
&lt;P&gt;What does the output of the expert mode command&amp;nbsp;&lt;STRONG&gt;mq_mng --show&lt;/STRONG&gt; reveal?&amp;nbsp; If that command doesn't work, try the clish command &lt;STRONG&gt;show interface (interface name) multi-queue&lt;/STRONG&gt;.&lt;/P&gt;</description>
      <pubDate>Wed, 17 Jun 2020 00:31:23 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/core-affinity-R80-40-two-cores/m-p/88830#M17837</guid>
      <dc:creator>Timothy_Hall</dc:creator>
      <dc:date>2020-06-17T00:31:23Z</dc:date>
    </item>
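As an aside, Multi-Queue capability can also be inferred per interface from expert mode. A minimal, hedged sketch: it parses `ethtool -l` style output, where a "Combined" (or RX) pre-set maximum of 1 means the NIC cannot spread SoftIRQ load across several SND cores. The sample output below is hypothetical; on a real gateway you would pipe `ethtool -l ethN` in instead, and exact field names vary by driver.

```shell
# Hypothetical `ethtool -l` output for a single-queue NIC; replace the
# sample with real output from `ethtool -l ethN` on the gateway.
sample='Channel parameters for eth1:
Pre-set maximums:
RX: 0
TX: 0
Other: 1
Combined: 1'

# A "Combined" maximum above 1 suggests the NIC driver supports
# multiple queues; 1 (or 0) means a single receive queue only.
printf '%s\n' "$sample" | awk '/^Combined:/ {
  if ($2 > 1) print "multi-queue capable (" $2 " queues)"
  else print "single queue only"
}'
```

This matches the symptom in the thread: with no multi-queue support, all interface SoftIRQ work lands on one SND core.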
    <item>
      <title>Re: core affinity R80.40 - two cores</title>
      <link>https://community.checkpoint.com/t5/General-Topics/core-affinity-R80-40-two-cores/m-p/88831#M17838</link>
      <description>&lt;P&gt;Thanks Tim.&amp;nbsp; No surprise, I received this message in response to that command: "No multiqueue supported interfaces available".&lt;/P&gt;&lt;P&gt;This cluster ran very clean on R80.20 - it has under 50 users behind it and is fed by only a 50M MPLS link.&lt;/P&gt;&lt;P&gt;I attached a netstat -ni. The strangest part is that the interface with the enormous number of drops (eth5) has the least traffic - it's just the backup cable modem we use for ISP redundancy. The only traffic on it is ICMP and tunnel-test.&lt;/P&gt;&lt;P&gt;Wondering if I should cap my 2-core 4000 series boxes at R80.30.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;Thanks for any feedback.&lt;/P&gt;</description>
      <pubDate>Wed, 17 Jun 2020 01:22:42 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/core-affinity-R80-40-two-cores/m-p/88831#M17838</guid>
      <dc:creator>D_TK</dc:creator>
      <dc:date>2020-06-17T01:22:42Z</dc:date>
    </item>
    <item>
      <title>Re: core affinity R80.40 - two cores</title>
      <link>https://community.checkpoint.com/t5/General-Topics/core-affinity-R80-40-two-cores/m-p/88832#M17839</link>
      <description>&lt;P&gt;Please provide output of &lt;STRONG&gt;ethtool -S eth5&lt;/STRONG&gt; and &lt;STRONG&gt;ethtool -S eth1&lt;/STRONG&gt;.&amp;nbsp; Although uncommon, RX-DRPs are sometimes caused not by ring buffer misses/overflows, but by incoming frames carrying unknown protocols that have no registered receiver.&amp;nbsp; This reporting behavior seems to have changed in Gaia 3.10, making it more likely than it was on Gaia 2.6.18.&lt;/P&gt;</description>
      <pubDate>Wed, 17 Jun 2020 11:39:09 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/core-affinity-R80-40-two-cores/m-p/88832#M17839</guid>
      <dc:creator>Timothy_Hall</dc:creator>
      <dc:date>2020-06-17T11:39:09Z</dc:date>
    </item>
    <item>
      <title>Re: core affinity R80.40 - two cores</title>
      <link>https://community.checkpoint.com/t5/General-Topics/core-affinity-R80-40-two-cores/m-p/88833#M17840</link>
      <description>&lt;P&gt;ethtool -S eth5&lt;BR /&gt;NIC statistics:&lt;BR /&gt;rx_packets: 124212&lt;BR /&gt;tx_packets: 81970&lt;BR /&gt;rx_bytes: 12548006&lt;BR /&gt;tx_bytes: 9474872&lt;BR /&gt;rx_broadcast: 28189&lt;BR /&gt;tx_broadcast: 1660&lt;BR /&gt;rx_multicast: 9074&lt;BR /&gt;tx_multicast: 2&lt;BR /&gt;rx_errors: 0&lt;BR /&gt;tx_errors: 0&lt;BR /&gt;tx_dropped: 0&lt;BR /&gt;multicast: 9074&lt;BR /&gt;collisions: 0&lt;BR /&gt;rx_length_errors: 0&lt;BR /&gt;rx_over_errors: 0&lt;BR /&gt;rx_crc_errors: 0&lt;BR /&gt;rx_frame_errors: 0&lt;BR /&gt;rx_no_buffer_count: 0&lt;BR /&gt;rx_missed_errors: 0&lt;BR /&gt;tx_aborted_errors: 0&lt;BR /&gt;tx_carrier_errors: 0&lt;BR /&gt;tx_fifo_errors: 0&lt;BR /&gt;tx_heartbeat_errors: 0&lt;BR /&gt;tx_window_errors: 0&lt;BR /&gt;tx_abort_late_coll: 0&lt;BR /&gt;tx_deferred_ok: 0&lt;BR /&gt;tx_single_coll_ok: 0&lt;BR /&gt;tx_multi_coll_ok: 0&lt;BR /&gt;tx_timeout_count: 0&lt;BR /&gt;tx_restart_queue: 0&lt;BR /&gt;rx_long_length_errors: 0&lt;BR /&gt;rx_short_length_errors: 0&lt;BR /&gt;rx_align_errors: 0&lt;BR /&gt;tx_tcp_seg_good: 0&lt;BR /&gt;tx_tcp_seg_failed: 0&lt;BR /&gt;rx_flow_control_xon: 0&lt;BR /&gt;rx_flow_control_xoff: 0&lt;BR /&gt;tx_flow_control_xon: 0&lt;BR /&gt;tx_flow_control_xoff: 0&lt;BR /&gt;rx_csum_offload_good: 65235&lt;BR /&gt;rx_csum_offload_errors: 4&lt;BR /&gt;rx_header_split: 0&lt;BR /&gt;alloc_rx_buff_failed: 0&lt;BR /&gt;tx_smbus: 0&lt;BR /&gt;rx_smbus: 0&lt;BR /&gt;dropped_smbus: 0&lt;BR /&gt;rx_dma_failed: 0&lt;BR /&gt;tx_dma_failed: 0&lt;BR /&gt;rx_hwtstamp_cleared: 0&lt;BR /&gt;uncorr_ecc_errors: 0&lt;BR /&gt;corr_ecc_errors: 0&lt;BR /&gt;tx_hwtstamp_timeouts: 0&lt;BR /&gt;tx_hwtstamp_skipped: 0&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;ethtool -S eth1&lt;BR /&gt;NIC statistics:&lt;BR /&gt;rx_packets: 3361399&lt;BR /&gt;tx_packets: 3707821&lt;BR /&gt;rx_bytes: 1152961595&lt;BR /&gt;tx_bytes: 2555983567&lt;BR /&gt;rx_broadcast: 195818&lt;BR /&gt;tx_broadcast: 26641&lt;BR 
/&gt;rx_multicast: 3054&lt;BR /&gt;tx_multicast: 2&lt;BR /&gt;rx_errors: 0&lt;BR /&gt;tx_errors: 0&lt;BR /&gt;tx_dropped: 0&lt;BR /&gt;multicast: 3054&lt;BR /&gt;collisions: 0&lt;BR /&gt;rx_length_errors: 0&lt;BR /&gt;rx_over_errors: 0&lt;BR /&gt;rx_crc_errors: 0&lt;BR /&gt;rx_frame_errors: 0&lt;BR /&gt;rx_no_buffer_count: 0&lt;BR /&gt;rx_missed_errors: 0&lt;BR /&gt;tx_aborted_errors: 0&lt;BR /&gt;tx_carrier_errors: 0&lt;BR /&gt;tx_fifo_errors: 0&lt;BR /&gt;tx_heartbeat_errors: 0&lt;BR /&gt;tx_window_errors: 0&lt;BR /&gt;tx_abort_late_coll: 0&lt;BR /&gt;tx_deferred_ok: 0&lt;BR /&gt;tx_single_coll_ok: 0&lt;BR /&gt;tx_multi_coll_ok: 0&lt;BR /&gt;tx_timeout_count: 0&lt;BR /&gt;tx_restart_queue: 0&lt;BR /&gt;rx_long_length_errors: 0&lt;BR /&gt;rx_short_length_errors: 0&lt;BR /&gt;rx_align_errors: 0&lt;BR /&gt;tx_tcp_seg_good: 0&lt;BR /&gt;tx_tcp_seg_failed: 0&lt;BR /&gt;rx_flow_control_xon: 0&lt;BR /&gt;rx_flow_control_xoff: 0&lt;BR /&gt;tx_flow_control_xon: 0&lt;BR /&gt;tx_flow_control_xoff: 0&lt;BR /&gt;rx_csum_offload_good: 3150510&lt;BR /&gt;rx_csum_offload_errors: 0&lt;BR /&gt;rx_header_split: 0&lt;BR /&gt;alloc_rx_buff_failed: 0&lt;BR /&gt;tx_smbus: 0&lt;BR /&gt;rx_smbus: 0&lt;BR /&gt;dropped_smbus: 0&lt;BR /&gt;rx_dma_failed: 0&lt;BR /&gt;tx_dma_failed: 0&lt;BR /&gt;rx_hwtstamp_cleared: 0&lt;BR /&gt;uncorr_ecc_errors: 0&lt;BR /&gt;corr_ecc_errors: 0&lt;BR /&gt;tx_hwtstamp_timeouts: 0&lt;BR /&gt;tx_hwtstamp_skipped: 0&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thank You.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 17 Jun 2020 01:54:37 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/core-affinity-R80-40-two-cores/m-p/88833#M17840</guid>
      <dc:creator>D_TK</dc:creator>
      <dc:date>2020-06-17T01:54:37Z</dc:date>
    </item>
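The long counter dumps above come down to just two fields in the next reply. As a side note, here is a small sketch of how one might filter a saved `ethtool -S` dump down to the ring-buffer counters; the counter names follow the e1000e-style output pasted above and are driver-specific, so the grep pattern is an assumption to adjust per NIC.

```shell
# Hypothetical excerpt of a saved `ethtool -S ethN` dump; in practice
# pipe the live command output in instead of this sample.
stats='rx_no_buffer_count: 0
rx_missed_errors: 0
rx_csum_offload_errors: 4'

# Keep only the two counters that indicate ring-buffer misses/overflows.
printf '%s\n' "$stats" | grep -E '^(rx_no_buffer_count|rx_missed_errors):'
```

If both counters stay at zero while RX-DRP climbs, the drops are not ring-buffer related, which is exactly the conclusion drawn below.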
    <item>
      <title>Re: core affinity R80.40 - two cores</title>
      <link>https://community.checkpoint.com/t5/General-Topics/core-affinity-R80-40-two-cores/m-p/88834#M17841</link>
      <description>&lt;P&gt;Yep, sure enough, these are zero for both interfaces:&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;SPAN&gt;rx_no_buffer_count: 0&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;rx_missed_errors: 0&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;So there is no problem with ring buffer misses/overruns.&amp;nbsp; This situation was covered in my book:&lt;/SPAN&gt;&lt;/P&gt;
&lt;LI-SPOILER&gt;
&lt;P&gt;&lt;STRONG&gt;RX-DRP Culprit 1: Unknown or Undesired Protocol Type&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;In every Ethernet frame is a header field called “EtherType”. This field specifies the OSI&lt;BR /&gt;Layer 3 protocol that the Ethernet frame is carrying. A very common value for this&lt;BR /&gt;header field is 0x0800, which indicates that the frame is carrying an Internet Protocol&lt;BR /&gt;version 4 (IPv4) packet. Look at this excerpt from Stage 6 of “A Millisecond in the life&lt;BR /&gt;of a frame”:&lt;/P&gt;
&lt;P&gt;Stage 6: At a later time the CPU begins SoftIRQ processing and looks in the ring&lt;BR /&gt;buffer. If a descriptor is present, the CPU retrieves the frame from the associated&lt;BR /&gt;receive socket buffer, clears the descriptor referencing the frame in the ring&lt;BR /&gt;buffer, and sends the frame to all “registered receivers” which will be the&lt;BR /&gt;SecureXL acceleration driver. If a tcpdump capture is currently running,&lt;BR /&gt;libpcap will also be a “registered receiver” in that case and get a copy of the&lt;BR /&gt;frame as well. The SoftIRQ processing continues until all ring buffers are&lt;BR /&gt;completely emptied, or various packet count or time limits have been reached.&lt;/P&gt;
&lt;P&gt;During hardware interrupt processing, the NIC driver will examine the EtherType&lt;BR /&gt;field and verify there is a “registered receiver” present for the protocol specified in the&lt;BR /&gt;frame header. &lt;EM&gt;If there is not, the frame is discarded and RX-DRP is incremented.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;Example: an Ethernet frame arrives with an EtherType of 0x86dd indicating the&lt;BR /&gt;presence of IPv6 in the Ethernet frame. If IPv6 has not been enabled on the firewall (it is&lt;BR /&gt;off by default), the frame will be discarded by the NIC driver and RX-DRP incremented.&lt;/P&gt;
&lt;P&gt;What other protocols are known to cause this effect in the real world? Here is a brief sampling of rogue EtherTypes you may see, by no means&lt;BR /&gt;complete:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Appletalk (0x809b)&lt;/LI&gt;
&lt;LI&gt;IPX (0x8137 or 0x8138)&lt;/LI&gt;
&lt;LI&gt;Ethernet Flow Control (0x8808) if NIC flow control is disabled&lt;/LI&gt;
&lt;LI&gt;Jumbo Frames (0x8870) if the firewall is not configured to process jumbo frames&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;BR /&gt;The dropping of these protocols for which there is no “registered receiver” does&lt;BR /&gt;cause a very small amount of overhead on the firewall during hardware interrupt&lt;BR /&gt;processing, but unless the number of frames discarded in this way exceeds 0.1% of all&lt;BR /&gt;inbound packets, you probably shouldn’t worry too much about it. An easy way to&lt;BR /&gt;confirm that the lack of a registered receiver is the cause of RX-DRPs is to perform the&lt;BR /&gt;following test:&lt;/P&gt;
&lt;P&gt;1. In a SSH or terminal window, run &lt;STRONG&gt;watch -d netstat -ni&lt;/STRONG&gt; and confirm the constant incrementing of RX-DRP on (interface).&lt;/P&gt;
&lt;P&gt;2. In a second SSH session, run &lt;STRONG&gt;tcpdump -ni (interface) host 1.1.1.1&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Does the near constant incrementing of RX-DRP on that interface suddenly stop as&lt;BR /&gt;long as the tcpdump is still running, and resume when the tcpdump is stopped? If so,&lt;BR /&gt;the lack of a registered receiver is indeed the cause of the RX-DRPs. The specified filter&lt;BR /&gt;expression (host 1.1.1.1 in our example) does not actually matter, since libpcap will&lt;/P&gt;
&lt;P&gt;register to receive all protocols on behalf of the running tcpdump, and then filter the&lt;BR /&gt;packets based on the provided tcpdump expression. So as long as the tcpdump is&lt;BR /&gt;running, there is essentially a registered receiver for everything.&lt;/P&gt;
&lt;P&gt;But how can we find out what these rogue protocols are, and more importantly figure&lt;BR /&gt;out where they are coming from? Run this tcpdump command to show every frame not&lt;BR /&gt;carrying IPv4 traffic or ARP traffic based on the EtherType header field:&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;tcpdump -c100 -eni (iface) not ether proto 0x0800 \&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;and not ether proto 0x0806 and not stp&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;(Note the ‘\’ at the end of line 1 of this command is a backslash and allows us to&lt;BR /&gt;continue the same command on a new line)&lt;/P&gt;
&lt;/LI-SPOILER&gt;</description>
      <pubDate>Wed, 17 Jun 2020 02:07:51 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/core-affinity-R80-40-two-cores/m-p/88834#M17841</guid>
      <dc:creator>Timothy_Hall</dc:creator>
      <dc:date>2020-06-17T02:07:51Z</dc:date>
    </item>
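The 0.1% rule of thumb in the excerpt above is easy to check mechanically against `netstat -ni`. A minimal sketch, with hypothetical numbers modeled on the counters discussed in this thread; real `netstat -ni` output has more columns, so the field positions ($2, $3) are an assumption to adjust against your own header line.

```shell
# Hypothetical, trimmed `netstat -ni`-style data: interface, RX-OK,
# RX-DRP. On a real gateway, pipe `netstat -ni` in and fix the columns.
sample='Iface RX-OK RX-DRP
eth1 3361399 120
eth5 124212 98000'

# Compute RX-DRP as a percentage of all inbound frames and flag any
# interface over the 0.1% rule of thumb from the book excerpt.
printf '%s\n' "$sample" | awk 'NR > 1 {
  total = $2 + $3
  pct = (total > 0) ? 100 * $3 / total : 0
  printf "%s %.3f%% %s\n", $1, pct, (pct > 0.1) ? "INVESTIGATE" : "ok"
}'
```

With these sample numbers, eth5 is flagged (tens of percent dropped) while eth1 sits far below the threshold, mirroring the asymmetry reported earlier in the thread.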
    <item>
      <title>Re: core affinity R80.40 - two cores</title>
      <link>https://community.checkpoint.com/t5/General-Topics/core-affinity-R80-40-two-cores/m-p/88835#M17842</link>
      <description>&lt;P&gt;Thanks Tim.&amp;nbsp; Appreciate your help.&lt;/P&gt;</description>
      <pubDate>Wed, 17 Jun 2020 02:40:34 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/core-affinity-R80-40-two-cores/m-p/88835#M17842</guid>
      <dc:creator>D_TK</dc:creator>
      <dc:date>2020-06-17T02:40:34Z</dc:date>
    </item>
  </channel>
</rss>

