<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic CRC errors on eth0 device in Spark Firewall (SMB)</title>
    <link>https://community.checkpoint.com/t5/Spark-Firewall-SMB/CRC-errors-on-eth0-device/m-p/174350#M8428</link>
    <description>&lt;P&gt;Hi All,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I posted about a year ago regarding unscheduled failovers between a pair of Check Point SMB 1590s running R80.20.35. Having opened a case with Check Point support, we changed the DNS probes to use IP addresses when testing for internet connectivity. This vastly reduced the number of failovers. Within the past few weeks, failovers have started to occur more regularly again. I am speaking with the customer about their internet connectivity, as initial investigation shows that both nodes can display the following before a failover:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;2023 Mar 1 16:44:33 Gatekeeper2 user.info cposd: [CPOSD] WAN connection "Internet1": Internet connection probe status has changed to Disconnected. servers: 3, fails: 10, attempts: 30&lt;BR /&gt;2023 Mar 1 16:44:33 Gatekeeper2 daemon.info dnsmasq: reading /etc/resolv.conf&lt;BR /&gt;2023 Mar 1 16:44:33 Gatekeeper2 daemon.info dnsmasq: using nameserver 8.8.8.8#53&lt;BR /&gt;2023 Mar 1 16:44:33 Gatekeeper2 daemon.info dnsmasq: using nameserver 8.8.4.4#53&lt;BR /&gt;2023 Mar 1 16:44:33 Gatekeeper2 daemon.info dnsmasq: read /var/hosts - 31 addresses&lt;BR /&gt;2023 Mar 1 16:44:42 Gatekeeper2 daemon.info dnsmasq: reading /etc/resolv.conf&lt;BR /&gt;2023 Mar 1 16:44:42 Gatekeeper2 daemon.info dnsmasq: using nameserver 8.8.8.8#53&lt;BR /&gt;2023 Mar 1 16:44:42 Gatekeeper2 daemon.info dnsmasq: using nameserver 8.8.4.4#53&lt;BR /&gt;2023 Mar 1 16:44:42 Gatekeeper2 daemon.info dnsmasq: read /var/hosts - 31 addresses&lt;BR /&gt;2023 Mar 1 16:44:44 Gatekeeper2 daemon.info dnsmasq: reading /etc/resolv.conf&lt;BR /&gt;2023 Mar 1 16:44:44 Gatekeeper2 daemon.info dnsmasq: using nameserver 8.8.8.8#53&lt;BR /&gt;2023 Mar 1 16:44:44 Gatekeeper2 daemon.info dnsmasq: using nameserver 8.8.4.4#53&lt;BR /&gt;2023 Mar 1 16:44:44 Gatekeeper2 daemon.info dnsmasq: read /var/hosts - 31 addresses&lt;BR /&gt;2023 Mar 1 16:44:45 Gatekeeper2 user.info cposd: [CPOSD] WAN connection "Internet1": Internet connection probe status has changed to Connected. servers: 3, fails: 9, attempts: 30&lt;BR /&gt;2023 Mar 1 16:44:46 Gatekeeper2 user.info lua: [Security Settings] A policy change has been applied&lt;BR /&gt;2023 Mar 1 16:44:46 Gatekeeper2 user.info lua: [Security Settings] High Availability policy change has been applied&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Upon further investigation with the software vendor these were purchased from, we saw the following in dmesg:&lt;/P&gt;&lt;P&gt;[29837928.418739] mvpp2 f2000000.ethernet eth0: bad rx status 12018514 (crc error), size=1420&lt;BR /&gt;[29838238.763245] mvpp2 f2000000.ethernet eth0: bad rx status 12018514 (crc error), size=1420&lt;BR /&gt;[29838389.930050] mvpp2 f2000000.ethernet eth0: bad rx status 12018514 (crc error), size=1420&lt;BR /&gt;[29838758.413897] mvpp2 f2000000.ethernet eth0: bad rx status 12018514 (crc error), size=1420&lt;BR /&gt;[29838807.134983] mvpp2 f2000000.ethernet eth0: bad rx status 12018514 (crc error), size=1420&lt;BR /&gt;[29838866.139433] mvpp2 f2000000.ethernet eth0: bad rx status 12018514 (crc error), size=1420&lt;BR /&gt;[29839793.944978] mvpp2 f2000000.ethernet eth0: bad rx status 13008514 (crc error), size=66&lt;BR /&gt;[29839895.013737] mvpp2 f2000000.ethernet eth0: bad rx status 12018514 (crc error), size=1420&lt;BR /&gt;[29839925.978426] mvpp2 f2000000.ethernet eth0: bad rx status 12018514 (crc error), size=1420&lt;BR /&gt;[29841751.537571] mvpp2 f2000000.ethernet eth0: bad rx status 12018514 (crc error), size=1420&lt;BR /&gt;[29841907.975809] mvpp2 f2000000.ethernet eth0: bad rx status 12018514 (crc error), size=1420&lt;BR /&gt;[29842040.630989] mvpp2 f2000000.ethernet eth0: bad rx status 12018514 (crc error), size=1420&lt;BR /&gt;[29843430.323976] mvpp2 f2000000.ethernet eth0: bad rx status 12018514 (crc error), size=1420&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Relative to the total number of packets, the number of CRC errors is low:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;eth0 Link encap:Ethernet HWaddr 00:1C:7F:AE:0A:A2&lt;BR /&gt;UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1&lt;BR /&gt;RX packets:106054832559 errors:11201 dropped:0 overruns:0 frame:0&lt;BR /&gt;TX packets:105562308040 errors:0 dropped:0 overruns:0 carrier:0&lt;BR /&gt;collisions:0 txqueuelen:2048&lt;BR /&gt;RX bytes:98857319801669 (89.9 TiB) TX bytes:97218218207773 (88.4 TiB)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Now, I read in a previous SK that the eth0 MAC address belongs to the CPU interface to which the LAN1 to LAN8 ports are connected, hence all interfaces have the same MAC (default behaviour), so I cannot easily identify whether the issue is caused by a LAN cable, a switch port, etc. Additionally, none of the LAN1-8 interfaces are showing RX errors. I also read about possible *cosmetic errors* for virtual interfaces in another SK.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I also see:&lt;/P&gt;&lt;P&gt;Event Code: CLUS-114704&lt;BR /&gt;State change: STANDBY -&amp;gt; ACTIVE&lt;BR /&gt;Reason for state change: No other ACTIVE members have been found in the cluster&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;From cphaprob state: is this stating that one cluster interface is not seeing traffic from its peer, or perhaps just the sync interface?&lt;/P&gt;&lt;P&gt;Both units have been up for well over 300 days, but we have an outage to patch (possibly up to R81) and reboot within the next two weeks, at which time I can check the cabling. I don't know if there are any specific types of LAN cable we should be using. The issue has only been seen on one of the two units.&lt;/P&gt;&lt;P&gt;I will check the switch statistics shortly too.&lt;/P&gt;&lt;P&gt;I was wondering if anyone had any views or experiences with this?&lt;/P&gt;&lt;P&gt;Thanks and Regards&lt;/P&gt;&lt;P&gt;Dek&lt;/P&gt;</description>
    <pubDate>Fri, 10 Mar 2023 12:51:35 GMT</pubDate>
    <dc:creator>DekPlent</dc:creator>
    <dc:date>2023-03-10T12:51:35Z</dc:date>
    <item>
      <title>CRC errors on eth0 device</title>
      <link>https://community.checkpoint.com/t5/Spark-Firewall-SMB/CRC-errors-on-eth0-device/m-p/174350#M8428</link>
      <description>&lt;P&gt;Hi All,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I posted about a year ago regarding unscheduled failovers between a pair of Check Point SMB 1590s running R80.20.35. Having opened a case with Check Point support, we changed the DNS probes to use IP addresses when testing for internet connectivity. This vastly reduced the number of failovers. Within the past few weeks, failovers have started to occur more regularly again. I am speaking with the customer about their internet connectivity, as initial investigation shows that both nodes can display the following before a failover:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;2023 Mar 1 16:44:33 Gatekeeper2 user.info cposd: [CPOSD] WAN connection "Internet1": Internet connection probe status has changed to Disconnected. servers: 3, fails: 10, attempts: 30&lt;BR /&gt;2023 Mar 1 16:44:33 Gatekeeper2 daemon.info dnsmasq: reading /etc/resolv.conf&lt;BR /&gt;2023 Mar 1 16:44:33 Gatekeeper2 daemon.info dnsmasq: using nameserver 8.8.8.8#53&lt;BR /&gt;2023 Mar 1 16:44:33 Gatekeeper2 daemon.info dnsmasq: using nameserver 8.8.4.4#53&lt;BR /&gt;2023 Mar 1 16:44:33 Gatekeeper2 daemon.info dnsmasq: read /var/hosts - 31 addresses&lt;BR /&gt;2023 Mar 1 16:44:42 Gatekeeper2 daemon.info dnsmasq: reading /etc/resolv.conf&lt;BR /&gt;2023 Mar 1 16:44:42 Gatekeeper2 daemon.info dnsmasq: using nameserver 8.8.8.8#53&lt;BR /&gt;2023 Mar 1 16:44:42 Gatekeeper2 daemon.info dnsmasq: using nameserver 8.8.4.4#53&lt;BR /&gt;2023 Mar 1 16:44:42 Gatekeeper2 daemon.info dnsmasq: read /var/hosts - 31 addresses&lt;BR /&gt;2023 Mar 1 16:44:44 Gatekeeper2 daemon.info dnsmasq: reading /etc/resolv.conf&lt;BR /&gt;2023 Mar 1 16:44:44 Gatekeeper2 daemon.info dnsmasq: using nameserver 8.8.8.8#53&lt;BR /&gt;2023 Mar 1 16:44:44 Gatekeeper2 daemon.info dnsmasq: using nameserver 8.8.4.4#53&lt;BR /&gt;2023 Mar 1 16:44:44 Gatekeeper2 daemon.info dnsmasq: read /var/hosts - 31 addresses&lt;BR /&gt;2023 Mar 1 16:44:45 Gatekeeper2 user.info cposd: [CPOSD] WAN connection "Internet1": Internet connection probe status has changed to Connected. servers: 3, fails: 9, attempts: 30&lt;BR /&gt;2023 Mar 1 16:44:46 Gatekeeper2 user.info lua: [Security Settings] A policy change has been applied&lt;BR /&gt;2023 Mar 1 16:44:46 Gatekeeper2 user.info lua: [Security Settings] High Availability policy change has been applied&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Upon further investigation with the software vendor these were purchased from, we saw the following in dmesg:&lt;/P&gt;&lt;P&gt;[29837928.418739] mvpp2 f2000000.ethernet eth0: bad rx status 12018514 (crc error), size=1420&lt;BR /&gt;[29838238.763245] mvpp2 f2000000.ethernet eth0: bad rx status 12018514 (crc error), size=1420&lt;BR /&gt;[29838389.930050] mvpp2 f2000000.ethernet eth0: bad rx status 12018514 (crc error), size=1420&lt;BR /&gt;[29838758.413897] mvpp2 f2000000.ethernet eth0: bad rx status 12018514 (crc error), size=1420&lt;BR /&gt;[29838807.134983] mvpp2 f2000000.ethernet eth0: bad rx status 12018514 (crc error), size=1420&lt;BR /&gt;[29838866.139433] mvpp2 f2000000.ethernet eth0: bad rx status 12018514 (crc error), size=1420&lt;BR /&gt;[29839793.944978] mvpp2 f2000000.ethernet eth0: bad rx status 13008514 (crc error), size=66&lt;BR /&gt;[29839895.013737] mvpp2 f2000000.ethernet eth0: bad rx status 12018514 (crc error), size=1420&lt;BR /&gt;[29839925.978426] mvpp2 f2000000.ethernet eth0: bad rx status 12018514 (crc error), size=1420&lt;BR /&gt;[29841751.537571] mvpp2 f2000000.ethernet eth0: bad rx status 12018514 (crc error), size=1420&lt;BR /&gt;[29841907.975809] mvpp2 f2000000.ethernet eth0: bad rx status 12018514 (crc error), size=1420&lt;BR /&gt;[29842040.630989] mvpp2 f2000000.ethernet eth0: bad rx status 12018514 (crc error), size=1420&lt;BR /&gt;[29843430.323976] mvpp2 f2000000.ethernet eth0: bad rx status 12018514 (crc error), size=1420&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Relative to the total number of packets, the number of CRC errors is low:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;eth0 Link encap:Ethernet HWaddr 00:1C:7F:AE:0A:A2&lt;BR /&gt;UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1&lt;BR /&gt;RX packets:106054832559 errors:11201 dropped:0 overruns:0 frame:0&lt;BR /&gt;TX packets:105562308040 errors:0 dropped:0 overruns:0 carrier:0&lt;BR /&gt;collisions:0 txqueuelen:2048&lt;BR /&gt;RX bytes:98857319801669 (89.9 TiB) TX bytes:97218218207773 (88.4 TiB)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Now, I read in a previous SK that the eth0 MAC address belongs to the CPU interface to which the LAN1 to LAN8 ports are connected, hence all interfaces have the same MAC (default behaviour), so I cannot easily identify whether the issue is caused by a LAN cable, a switch port, etc. Additionally, none of the LAN1-8 interfaces are showing RX errors. I also read about possible *cosmetic errors* for virtual interfaces in another SK.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I also see:&lt;/P&gt;&lt;P&gt;Event Code: CLUS-114704&lt;BR /&gt;State change: STANDBY -&amp;gt; ACTIVE&lt;BR /&gt;Reason for state change: No other ACTIVE members have been found in the cluster&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;From cphaprob state: is this stating that one cluster interface is not seeing traffic from its peer, or perhaps just the sync interface?&lt;/P&gt;&lt;P&gt;Both units have been up for well over 300 days, but we have an outage to patch (possibly up to R81) and reboot within the next two weeks, at which time I can check the cabling. I don't know if there are any specific types of LAN cable we should be using. The issue has only been seen on one of the two units.&lt;/P&gt;&lt;P&gt;I will check the switch statistics shortly too.&lt;/P&gt;&lt;P&gt;I was wondering if anyone had any views or experiences with this?&lt;/P&gt;&lt;P&gt;Thanks and Regards&lt;/P&gt;&lt;P&gt;Dek&lt;/P&gt;</description>
      <pubDate>Fri, 10 Mar 2023 12:51:35 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Spark-Firewall-SMB/CRC-errors-on-eth0-device/m-p/174350#M8428</guid>
      <dc:creator>DekPlent</dc:creator>
      <dc:date>2023-03-10T12:51:35Z</dc:date>
    </item>
    <item>
      <title>Re: CRC errors on eth0 device</title>
      <link>https://community.checkpoint.com/t5/Spark-Firewall-SMB/CRC-errors-on-eth0-device/m-p/174416#M8436</link>
      <description>&lt;P&gt;Those sorts of errors tend to be related to cabling or whatever is at the other end of it.&lt;/P&gt;</description>
      <pubDate>Fri, 10 Mar 2023 23:56:07 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Spark-Firewall-SMB/CRC-errors-on-eth0-device/m-p/174416#M8436</guid>
      <dc:creator>PhoneBoy</dc:creator>
      <dc:date>2023-03-10T23:56:07Z</dc:date>
    </item>
    <item>
      <title>Re: CRC errors on eth0 device</title>
      <link>https://community.checkpoint.com/t5/Spark-Firewall-SMB/CRC-errors-on-eth0-device/m-p/174427#M8440</link>
      <description>&lt;P&gt;Definitely agree with PhoneBoy...this is most likely a cabling issue. I know the link below is from the Cisco community, but it can really apply to any vendor.&lt;/P&gt;
&lt;P&gt;Andy&lt;/P&gt;
&lt;P&gt;&lt;A href="https://community.cisco.com/t5/switching/how-to-resolve-crc-errors/td-p/2216327" target="_blank"&gt;https://community.cisco.com/t5/switching/how-to-resolve-crc-errors/td-p/2216327&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Sat, 11 Mar 2023 02:46:34 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Spark-Firewall-SMB/CRC-errors-on-eth0-device/m-p/174427#M8440</guid>
      <dc:creator>the_rock</dc:creator>
      <dc:date>2023-03-11T02:46:34Z</dc:date>
    </item>
    <item>
      <title>Re: CRC errors on eth0 device</title>
      <link>https://community.checkpoint.com/t5/Spark-Firewall-SMB/CRC-errors-on-eth0-device/m-p/174451#M8442</link>
      <description>&lt;P&gt;Probably a cabling issue, but please provide output of &lt;STRONG&gt;ethtool -S eth0&lt;/STRONG&gt; for confirmation.&lt;/P&gt;</description>
      <pubDate>Sat, 11 Mar 2023 14:20:27 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Spark-Firewall-SMB/CRC-errors-on-eth0-device/m-p/174451#M8442</guid>
      <dc:creator>Timothy_Hall</dc:creator>
      <dc:date>2023-03-11T14:20:27Z</dc:date>
    </item>
    <item>
      <title>Re: CRC errors on eth0 device</title>
      <link>https://community.checkpoint.com/t5/Spark-Firewall-SMB/CRC-errors-on-eth0-device/m-p/174555#M8453</link>
      <description>&lt;P&gt;Hi Timothy and all who responded;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The output of ethtool:&lt;/P&gt;&lt;P&gt;[Expert@Gatekeeper2]# ethtool -S eth0&lt;BR /&gt;NIC statistics:&lt;BR /&gt;good_octets_received: 100537372295696&lt;BR /&gt;bad_octets_received: 15799069&lt;BR /&gt;crc_errors_sent: 0&lt;BR /&gt;unicast_frames_received: 105806645478&lt;BR /&gt;broadcast_frames_received: 1582226201&lt;BR /&gt;multicast_frames_received: 58682701&lt;BR /&gt;frames_64_octets: 32266154150&lt;BR /&gt;frames_65_to_127_octet: 42570767442&lt;BR /&gt;frames_128_to_255_octet: 2245070861&lt;BR /&gt;frames_256_to_511_octet: 958205924&lt;BR /&gt;frames_512_to_1023_octet: 1101279521&lt;BR /&gt;frames_1024_to_max_octet: 135193459296&lt;BR /&gt;good_octets_sent: 98933885178893&lt;BR /&gt;unicast_frames_sent: 105294360565&lt;BR /&gt;multicast_frames_sent: 5&lt;BR /&gt;broadcast_frames_sent: 1593010746&lt;BR /&gt;fc_sent: 0&lt;BR /&gt;fc_received: 0&lt;BR /&gt;rx_fifo_overrun: 0&lt;BR /&gt;undersize_received: 0&lt;BR /&gt;fragments_err_received: 1&lt;BR /&gt;oversize_received: 0&lt;BR /&gt;jabber_received: 0&lt;BR /&gt;mac_receive_error: 220&lt;BR /&gt;bad_crc_event: 11279&lt;BR /&gt;collision: 0&lt;BR /&gt;late_collision: 0&lt;BR /&gt;rx_ppv2_overrun: 0&lt;BR /&gt;rx_cls_drop : 58529606&lt;BR /&gt;rx_fullq_drop : 0&lt;BR /&gt;rx_early_drop : 0&lt;BR /&gt;rx_bm_drop : 0&lt;BR /&gt;tx-guard-cpu0 : 1438088&lt;BR /&gt;tx-guard-cpu1 : 1475925&lt;BR /&gt;tx-guard-cpu2 : 1508278&lt;BR /&gt;tx-guard-cpu3 : 1508097&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;[Expert@Gatekeeper2]# uptime&lt;BR /&gt;12:45:47 up 349 days, 1:35, 2 users, load average: 0.03, 0.09, 0.06&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;This unit has 6 possible interfaces/cables which share the eth0 mac that I can investigate. None of the switch ports that this particular unit is connected to show any errors however. 
There are just 5 CRC errors logged on one switch port, which is connected to a Juniper firewall that carries traffic to and from the Check Point.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks and Regards&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Derek&lt;/P&gt;</description>
      <pubDate>Mon, 13 Mar 2023 13:02:59 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Spark-Firewall-SMB/CRC-errors-on-eth0-device/m-p/174555#M8453</guid>
      <dc:creator>DekPlent</dc:creator>
      <dc:date>2023-03-13T13:02:59Z</dc:date>
    </item>
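One way to tell whether the bad_crc_event counter reported above is still climbing, rather than being historical, is to sample it twice and compare. A minimal sketch, assuming ethtool, awk, and a POSIX shell are available on the appliance; the 60-second interval is arbitrary:

```shell
# Sample the NIC CRC counter twice and print the delta.
a=$(ethtool -S eth0 | awk '/bad_crc_event/ {print $2}')
sleep 60
b=$(ethtool -S eth0 | awk '/bad_crc_event/ {print $2}')
echo "CRC errors in the last 60s: $(( b - a ))"
```

A delta of zero over a busy period would suggest the errors are old (or cosmetic); a steadily growing delta points at a live cabling or port problem.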
    <item>
      <title>Re: CRC errors on eth0 device</title>
      <link>https://community.checkpoint.com/t5/Spark-Firewall-SMB/CRC-errors-on-eth0-device/m-p/174560#M8454</link>
      <description>&lt;P&gt;Looks like a cabling issue; which side shows CRC errors depends on which wires in which specific pairs are bad in the current cable.&lt;/P&gt;
&lt;P&gt;The number of CRCs is very small for 349 days of uptime, so I wouldn't worry too much about it; just replace the cable when you can.&amp;nbsp; Also make sure the cable is not running alongside or near any large power sources/cables, due to the remote possibility of EMI, and try to use at least a Cat6 cable if you have one, as it is somewhat more resistant to EMI than Cat5e.&lt;/P&gt;</description>
      <pubDate>Mon, 13 Mar 2023 13:18:16 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Spark-Firewall-SMB/CRC-errors-on-eth0-device/m-p/174560#M8454</guid>
      <dc:creator>Timothy_Hall</dc:creator>
      <dc:date>2023-03-13T13:18:16Z</dc:date>
    </item>
    <item>
      <title>Re: CRC errors on eth0 device</title>
      <link>https://community.checkpoint.com/t5/Spark-Firewall-SMB/CRC-errors-on-eth0-device/m-p/174562#M8455</link>
      <description>&lt;P&gt;Tim brings up a good point Derek, definitely try to use cat6 cable if you can.&lt;/P&gt;</description>
      <pubDate>Mon, 13 Mar 2023 13:20:15 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Spark-Firewall-SMB/CRC-errors-on-eth0-device/m-p/174562#M8455</guid>
      <dc:creator>the_rock</dc:creator>
      <dc:date>2023-03-13T13:20:15Z</dc:date>
    </item>
    <item>
      <title>Re: CRC errors on eth0 device</title>
      <link>https://community.checkpoint.com/t5/Spark-Firewall-SMB/CRC-errors-on-eth0-device/m-p/174573#M8457</link>
      <description>&lt;P&gt;Thanks for the advice,&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;I have requested some Cat6 cables for my upcoming visit to site at the end of the month.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;Just as an aside, can anyone advise if there is a 'one-liner' that produces a human-readable date from the dmesg timestamps, please? The usual options to dmesg do not seem to be available on this appliance. I'd like to find out if there is any correlation between the errors and the failover times.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;Thanks again for your help&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Regards&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Derek&lt;/P&gt;</description>
      <pubDate>Mon, 13 Mar 2023 13:50:13 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Spark-Firewall-SMB/CRC-errors-on-eth0-device/m-p/174573#M8457</guid>
      <dc:creator>DekPlent</dc:creator>
      <dc:date>2023-03-13T13:50:13Z</dc:date>
    </item>
    <item>
      <title>Re: CRC errors on eth0 device</title>
      <link>https://community.checkpoint.com/t5/Spark-Firewall-SMB/CRC-errors-on-eth0-device/m-p/174575#M8459</link>
      <description>&lt;P&gt;Heya Derek,&lt;/P&gt;
&lt;P&gt;I can't speak for anyone else, but what I always do is run cat or more on the file, so in this case either cat /var/log/dmesg or more /var/log/dmesg. Alternatively, you can log into the firewall using WinSCP (as long as that's enabled with the chsh command), navigate to /var/log, copy the dmesg file to your local machine, and open it that way (I always use Notepad++ for that purpose).&lt;/P&gt;
&lt;P&gt;Now, I'm not certain any of those methods will give you actual timestamps, but you can try.&lt;/P&gt;
&lt;P&gt;Andy&lt;/P&gt;</description>
      <pubDate>Mon, 13 Mar 2023 14:01:36 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Spark-Firewall-SMB/CRC-errors-on-eth0-device/m-p/174575#M8459</guid>
      <dc:creator>the_rock</dc:creator>
      <dc:date>2023-03-13T14:01:36Z</dc:date>
    </item>
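For the dmesg timestamp question above: the bracketed number is seconds since boot, so one rough conversion is boot time (now minus uptime) plus that value. A sketch, assuming /proc/uptime exists and the appliance's date accepts -d with an @epoch argument (GNU or BusyBox date); the ts value is one of the stamps from the dmesg output earlier in the thread:

```shell
# Convert a dmesg [seconds-since-boot] stamp to approximate wall-clock time.
ts=29837928                         # seconds value from the dmesg line
up=$(cut -d. -f1 /proc/uptime)      # current uptime, whole seconds
boot=$(( $(date +%s) - up ))        # approximate epoch time of the last boot
date -d "@$(( boot + ts ))"         # wall-clock time of the logged event
```

The result can drift by a few seconds (suspend, clock adjustments), but it should be close enough to correlate CRC errors with failover times in the syslog.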
  </channel>
</rss>