<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Issue on the sync interface in General Topics</title>
    <link>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30650#M6366</link>
    <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Try disabling synchronization for the DNS service.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
    <pubDate>Fri, 12 Oct 2018 19:14:08 GMT</pubDate>
    <dc:creator>Jesus_Vladimir_</dc:creator>
    <dc:date>2018-10-12T19:14:08Z</dc:date>
    <item>
      <title>Issue on the sync interface</title>
      <link>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30640#M6356</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi guys!&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;Currently I have a ticket open with TAC for this case, but so far nothing...&lt;/P&gt;&lt;P&gt;So I decided to hear some other opinions in the meantime. hahah&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;The issue is that my customer has an R80.10 cluster (appliance model 5800 in HA mode), where the synchronization interface between the members is connected by a direct cable.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Every day, at a random time, the sync interface flaps and member 2 (in Standby) tries to assume the Active state of the cluster. Most of the time, some VPNs also drop in the same minute.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;In /var/log/messages I always get the same log structure:&lt;/P&gt;&lt;P&gt;"&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Sep 27 13:37:10 2018 fw02 kernel: [fw4_1];fwha_report_id_problem_status: Try to update state to DOWN due to pnote Interface Active Check (&lt;STRONG&gt;desc eth8 interface is down, 8 interfaces required, only 7 up&lt;/STRONG&gt;)&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Sep 27 13:37:10 2018 fw02 kernel: [fw4_1];FW-1: fwha_set_new_local_state: Setting state of fwha_local_id(1) to DOWN&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Sep 27 13:37:10 2018 fw02 kernel: [fw4_1];FW-1: fwha_update_local_state: Local machine state changed to DOWN&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Sep 27 13:37:10 2018 fw02 kernel: [fw4_1];fwha_state_change_implied: Try to update state to ACTIVE because member is down (the change may not be allowed).&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Sep 27 13:37:11 2018 fw02 kernel: [fw4_1];check_other_machine_activity: Update state of member id 0 to DEAD, didn't hear from it since 2021025.4 and now 2021028.4&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Sep 27 13:37:11 2018 fw02 kernel: [fw4_1];fwha_set_backup_mode: Try to update local state to ACTIVE because of ID 0 is not ACTIVE or READY. 
(This attempt may be blocked by other machines)&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Sep 27 13:37:11 2018 fw02 kernel: [fw4_1];FW-1: fwha_set_new_local_state: Setting state of fwha_local_id(1) to READY&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Sep 27 13:37:11 2018 fw02 kernel: [fw4_1];FW-1: fwha_update_local_state: Local machine state changed to READY&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Sep 27 13:37:11 2018 fw02 kernel: [fw4_1];FW-1: fwha_update_state: ID 0 (state ACTIVE -&amp;gt; DOWN) (time 2021028.4)&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Sep 27 13:37:11 2018 fw02 kernel: [fw4_1]; member 0 is down&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Sep 27 13:37:11 2018 fw02 kernel: [fw4_1];FW-1: fwha_state_change_implied: Try to update local state from READY to ACTIVE because all other machines confirmed my READY state&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Sep 27 13:37:11 2018 fw02 kernel: [fw4_1];FW-1: fwha_set_new_local_state: Setting state of fwha_local_id(1) to ACTIVE&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Sep 27 13:37:11 2018 fw02 kernel: [fw4_1];FW-1: fwha_update_local_state: Local machine state changed to ACTIVE&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Sep 27 13:37:12 2018 fw02 kernel: [fw4_1];fwha_report_id_problem_status: Try to update state to ACTIVE due to pnote Interface Active Check (desc &amp;lt;NULL&amp;gt;)&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Sep 27 13:37:12 2018 fw02 kernel: [fw4_1];FW-1: fwha_process_state_msg: Update state of member id 0 to ACTIVE due to the member report message&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Sep 27 13:37:12 2018 fw02 kernel: [fw4_1];fwha_set_backup_mode: Try to update local state to STANDBY because of ID 0 is ACTIVE or READY and with higher priority&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Sep 27 13:37:12 2018 fw02 kernel: [fw4_1];FW-1: fwha_set_new_local_state: Setting state of fwha_local_id(1) to STANDBY&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Sep 27 13:37:12 2018 fw02 kernel: [fw4_1];FW-1: fwha_update_local_state: Local 
machine state changed to STANDBY&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Sep 27 13:37:12 2018 fw02 kernel: [fw4_1];FW-1: fwha_update_state: ID 0 (state DOWN -&amp;gt; ACTIVE) (time 2021029.5)&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;" &lt;BR /&gt;Does anyone have an idea what could cause this behavior?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Note:&lt;/STRONG&gt; So far I have already tried a few changes:&lt;/P&gt;&lt;P&gt;- Updated the Jumbo Hotfix to Take 121;&lt;/P&gt;&lt;P&gt;- Moved the synchronization interface from Sync to eth8;&lt;/P&gt;&lt;P&gt;- Replaced the cable connecting the cluster members;&lt;/P&gt;&lt;P&gt;- Changed the CCP mode from multicast to broadcast.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Thanks in advance!&lt;/STRONG&gt;&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 27 Sep 2018 18:17:02 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30640#M6356</guid>
      <dc:creator>MATEUS_SALGADO</dc:creator>
      <dc:date>2018-09-27T18:17:02Z</dc:date>
    </item>
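The pnote sequence quoted in the post above repeats with an identical structure on every flap, so its frequency and timing can be tallied mechanically from /var/log/messages before a capture window is scheduled. A minimal sketch (the function name `count_flaps` is ours, not a Check Point tool; it assumes the timestamp layout shown in the quoted lines):

```python
import re
from collections import Counter

# Matches the DOWN pnote line quoted in the post, e.g.
# "Sep 27 13:37:10 2018 fw02 kernel: [fw4_1];fwha_report_id_problem_status:
#  Try to update state to DOWN due to pnote Interface Active Check (...)"
PNOTE_DOWN = re.compile(
    r"^(\w{3} +\d+) \d{2}:\d{2}:\d{2} \d{4} \S+ kernel: "
    r".*Try to update state to DOWN due to pnote Interface Active Check"
)

def count_flaps(lines):
    """Tally DOWN pnote events per 'Mon DD' day from messages-log lines."""
    flaps = Counter()
    for line in lines:
        match = PNOTE_DOWN.match(line)
        if match:
            flaps[match.group(1)] += 1
    return flaps
```

Counting only the DOWN transitions (not the matching ACTIVE recovery lines) gives one entry per flap, which helps answer whether the events cluster at a predictable time of day.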
    <item>
      <title>Re: Issue on the sync interface</title>
      <link>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30641#M6357</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Perhaps I am missing something: can you point me to the indicator that shows Sync interface state change?&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 27 Sep 2018 19:00:31 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30641#M6357</guid>
      <dc:creator>Vladimir</dc:creator>
      <dc:date>2018-09-27T19:00:31Z</dc:date>
    </item>
    <item>
      <title>Re: Issue on the sync interface</title>
      <link>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30642#M6358</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Did you see Danny's post?&amp;nbsp;&lt;A href="https://community.checkpoint.com/thread/9624-clusterxl-improved-stability-hotfix" target="_blank"&gt;https://community.checkpoint.com/thread/9624-clusterxl-improved-stability-hotfix&lt;/A&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Also try switching sync to a different interface, if that's an option, to rule out some HW issues.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 21 Jun 2019 09:17:17 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30642#M6358</guid>
      <dc:creator>Kaspars_Zibarts</dc:creator>
      <dc:date>2019-06-21T09:17:17Z</dc:date>
    </item>
    <item>
      <title>Re: Issue on the sync interface</title>
      <link>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30643#M6359</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi Vladimir,&lt;/P&gt;&lt;P&gt;I put the relevant line in bold (:&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 27 Sep 2018 19:58:40 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30643#M6359</guid>
      <dc:creator>MATEUS_SALGADO</dc:creator>
      <dc:date>2018-09-27T19:58:40Z</dc:date>
    </item>
    <item>
      <title>Re: Issue on the sync interface</title>
      <link>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30644#M6360</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi Kaspars,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I will take a look at Danny's post, thank you.&lt;/P&gt;&lt;P&gt;As for the interface, I already did that: I switched from the interface called Sync to eth8.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 27 Sep 2018 20:01:41 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30644#M6360</guid>
      <dc:creator>MATEUS_SALGADO</dc:creator>
      <dc:date>2018-09-27T20:01:41Z</dc:date>
    </item>
    <item>
      <title>Re: Issue on the sync interface</title>
      <link>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30645#M6361</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Did the Hotfix help solve your issue?&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 27 Sep 2018 20:36:48 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30645#M6361</guid>
      <dc:creator>Danny</dc:creator>
      <dc:date>2018-09-27T20:36:48Z</dc:date>
    </item>
    <item>
      <title>Re: Issue on the sync interface</title>
      <link>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30646#M6362</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Not sure if this could be the issue, but is there a chance that there is a network in your infrastructure that conflicts with the IPs assigned to the Sync?&amp;nbsp;&lt;/P&gt;&lt;P&gt;Is there a chance that there is a VPN recently setup with conflicting encryption domain?&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 27 Sep 2018 20:54:49 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30646#M6362</guid>
      <dc:creator>Vladimir</dc:creator>
      <dc:date>2018-09-27T20:54:49Z</dc:date>
    </item>
    <item>
      <title>Re: Issue on the sync interface</title>
      <link>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30647#M6363</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Q: &lt;SPAN style="color: #333333; background-color: #ffffff;"&gt;But is there a chance that there is a network in your infrastructure that conflicts with the IPs assigned to the Sync?&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #333333; background-color: #ffffff;"&gt;A: No Vlad, the infrastructure team reserved a /30 network for the sync interface.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #333333; background-color: #ffffff;"&gt;Q: Is there a chance that there is a VPN recently set up with a conflicting encryption domain?&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #333333; background-color: #ffffff;"&gt;A: I don't think so, because these VPNs are old, but I will confirm with my customer.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #333333; background-color: #ffffff;"&gt;Thanks.&lt;/SPAN&gt;&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 27 Sep 2018 21:31:22 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30647#M6363</guid>
      <dc:creator>MATEUS_SALGADO</dc:creator>
      <dc:date>2018-09-27T21:31:22Z</dc:date>
    </item>
    <item>
      <title>Re: Issue on the sync interface</title>
      <link>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30648#M6364</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi Danny!&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I haven't installed the hotfix yet (I need to schedule a maintenance window with my customer for this).&lt;/P&gt;&lt;P&gt;Thanks.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 27 Sep 2018 21:34:01 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30648#M6364</guid>
      <dc:creator>MATEUS_SALGADO</dc:creator>
      <dc:date>2018-09-27T21:34:01Z</dc:date>
    </item>
    <item>
      <title>Re: Issue on the sync interface</title>
      <link>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30649#M6365</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi Danny!&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;Yesterday we tried to install the fix (on top of Jumbo Take 142), but with no success!&lt;/P&gt;&lt;P&gt;When the fix was installed on a member, the HA module stopped working.&lt;/P&gt;&lt;P&gt;When I ran the cphastart command, this message showed up:&lt;/P&gt;&lt;P&gt;cphastart: symbol lookup error: cphastart: undefined symbol: get_cluster_interfaces&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;Even when I try through cpconfig, the module doesn't start.&lt;/P&gt;&lt;P&gt;I am thinking of installing Jumbo Take 151, since the fix is already included in it.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Any idea about this problem?&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 10 Oct 2018 19:17:49 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30649#M6365</guid>
      <dc:creator>MATEUS_SALGADO</dc:creator>
      <dc:date>2018-10-10T19:17:49Z</dc:date>
    </item>
    <item>
      <title>Re: Issue on the sync interface</title>
      <link>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30650#M6366</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Try disabling synchronization for the DNS service.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 12 Oct 2018 19:14:08 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30650#M6366</guid>
      <dc:creator>Jesus_Vladimir_</dc:creator>
      <dc:date>2018-10-12T19:14:08Z</dc:date>
    </item>
    <item>
      <title>Re: Issue on the sync interface</title>
      <link>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30651#M6367</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi Jesus!&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Sorry, I don't understand the answer...&lt;/P&gt;&lt;P&gt;How could I do this?&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 16 Oct 2018 18:16:48 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30651#M6367</guid>
      <dc:creator>MATEUS_SALGADO</dc:creator>
      <dc:date>2018-10-16T18:16:48Z</dc:date>
    </item>
    <item>
      <title>Re: Issue on the sync interface</title>
      <link>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30652#M6368</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;###UPDATE###&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Yesterday we successfully installed the newest Jumbo on both members (the ongoing Take 151, which already includes the&lt;A _jive_internal="true" href="https://community.checkpoint.com/thread/9624-clusterxl-improved-stability-hotfix"&gt; ClusterXL Improved Stability Hotfix&lt;/A&gt;), but the issue remains!&lt;/P&gt;&lt;P&gt;Today the sync interface has already flapped twice.&lt;BR /&gt;&lt;span class="lia-unicode-emoji" title=":disappointed_face:"&gt;😞&lt;/span&gt;&lt;/P&gt;&lt;P&gt;Does anyone have any new ideas?&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 16 Oct 2018 18:25:35 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30652#M6368</guid>
      <dc:creator>MATEUS_SALGADO</dc:creator>
      <dc:date>2018-10-16T18:25:35Z</dc:date>
    </item>
    <item>
      <title>Re: Issue on the sync interface</title>
      <link>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30653#M6369</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;### ANOTHER UPDATE ###&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Hello fellows!&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;After we installed the Take 154 Jumbo Hotfix, the problems with the VPNs were resolved.&lt;/P&gt;&lt;P&gt;But the synchronization interface is still flapping and the cluster members continue trying to change state.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Below are some excerpts from the messages log.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Sync interface flapping + cluster members trying to change state&lt;/STRONG&gt;:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Nov 27 14:04:12 2018 fw02 kernel: [fw4_1];fwha_report_id_problem_status: Try to update state to DOWN due to pnote Interface Active Check (desc eth8 interface is down, 8 interfaces required, only 7 up)&lt;BR /&gt;Nov 27 14:04:12 2018 fw02 kernel: [fw4_1];FW-1: fwha_set_new_local_state: Setting state of fwha_local_id(1) to DOWN&lt;BR /&gt;Nov 27 14:04:12 2018 fw02 kernel: [fw4_1];FW-1: fwha_update_local_state: Local machine state changed to DOWN&lt;BR /&gt;Nov 27 14:04:12 2018 fw02 kernel: [fw4_1];fwha_state_change_implied: Try to update state to ACTIVE because member is down (the change may not be allowed).&lt;BR /&gt;Nov 27 14:04:13 2018 fw02 kernel: [fw4_1];check_other_machine_activity: Update state of member id 0 to DEAD, didn't hear from it since 403715.3 and now 403718.3&lt;BR /&gt;Nov 27 14:04:13 2018 fw02 kernel: [fw4_1];fwha_set_backup_mode: Try to update local state to ACTIVE because of ID 0 is not ACTIVE or READY. 
(This attempt may be blocked by other machines)&lt;BR /&gt;Nov 27 14:04:13 2018 fw02 kernel: [fw4_1];FW-1: fwha_set_new_local_state: Setting state of fwha_local_id(1) to READY&lt;BR /&gt;Nov 27 14:04:13 2018 fw02 kernel: [fw4_1];FW-1: fwha_update_local_state: Local machine state changed to READY&lt;BR /&gt;Nov 27 14:04:13 2018 fw02 kernel: [fw4_1];FW-1: fwha_update_state: ID 0 (state ACTIVE -&amp;gt; DOWN) (time 403718.3)&lt;BR /&gt;Nov 27 14:04:13 2018 fw02 kernel: [fw4_1]; member 0 is down&lt;BR /&gt;Nov 27 14:04:13 2018 fw02 kernel: [fw4_1];FW-1: fwha_state_change_implied: Try to update local state from READY to ACTIVE because all other machines confirmed my READY state&lt;BR /&gt;Nov 27 14:04:13 2018 fw02 kernel: [fw4_1];FW-1: fwha_set_new_local_state: Setting state of fwha_local_id(1) to ACTIVE&lt;BR /&gt;Nov 27 14:04:13 2018 fw02 kernel: [fw4_1];FW-1: fwha_update_local_state: Local machine state changed to ACTIVE&lt;BR /&gt;Nov 27 14:04:13 2018 fw02 kernel: [fw4_0];fwxlate_dyn_port_release_global_quota: kiss_ghtab_get failed&lt;BR /&gt;Nov 27 14:04:13 2018 fw02 last message repeated 5 times&lt;BR /&gt;Nov 27 14:04:14 2018 fw02 kernel: [fw4_1];fwha_report_id_problem_status: Try to update state to ACTIVE due to pnote Interface Active Check (desc &amp;lt;NULL&amp;gt;)&lt;BR /&gt;Nov 27 14:04:15 2018 fw02 kernel: [fw4_1];FW-1: fwha_process_state_msg: Update state of member id 0 to ACTIVE due to the member report message&lt;BR /&gt;Nov 27 14:04:15 2018 fw02 kernel: [fw4_1];fwha_set_backup_mode: Try to update local state to STANDBY because of ID 0 is ACTIVE or READY and with higher priority&lt;BR /&gt;Nov 27 14:04:15 2018 fw02 kernel: [fw4_1];FW-1: fwha_set_new_local_state: Setting state of fwha_local_id(1) to STANDBY&lt;BR /&gt;Nov 27 14:04:15 2018 fw02 kernel: [fw4_1];FW-1: fwha_update_local_state: Local machine state changed to STANDBY&lt;BR /&gt;Nov 27 14:04:15 2018 fw02 kernel: [fw4_1];FW-1: fwha_update_state: ID 0 (state DOWN -&amp;gt; ACTIVE) (time 
403721.0)&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Cluster's member trying to change the state (without the flapping):&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;Nov 27 16:36:30 2018 fw02 kernel: [fw4_1];check_other_machine_activity: Update state of member id 0 to DEAD, didn't hear from it since 412743.5 and now 412746.5&lt;BR /&gt;Nov 27 16:36:30 2018 fw02 kernel: [fw4_1];fwha_set_backup_mode: Try to update local state to ACTIVE because of ID 0 is not ACTIVE or READY. (This attempt may be blocked by other machines)&lt;BR /&gt;Nov 27 16:36:30 2018 fw02 kernel: [fw4_1];FW-1: fwha_set_new_local_state: Setting state of fwha_local_id(1) to READY&lt;BR /&gt;Nov 27 16:36:30 2018 fw02 kernel: [fw4_1];FW-1: fwha_update_local_state: Local machine state changed to READY&lt;BR /&gt;Nov 27 16:36:30 2018 fw02 kernel: [fw4_1];FW-1: fwha_update_state: ID 0 (state ACTIVE -&amp;gt; DOWN) (time 412746.5)&lt;BR /&gt;Nov 27 16:36:30 2018 fw02 kernel: [fw4_1]; member 0 is down&lt;BR /&gt;Nov 27 16:36:30 2018 fw02 kernel: [fw4_1];FW-1: fwha_state_change_implied: Try to update local state from READY to ACTIVE because all other machines confirmed my READY state&lt;BR /&gt;Nov 27 16:36:30 2018 fw02 kernel: [fw4_1];FW-1: fwha_set_new_local_state: Setting state of fwha_local_id(1) to ACTIVE&lt;BR /&gt;Nov 27 16:36:30 2018 fw02 kernel: [fw4_1];FW-1: fwha_update_local_state: Local machine state changed to ACTIVE&lt;BR /&gt;Nov 27 16:36:30 2018 fw02 kernel: [fw4_0];fwxlate_dyn_port_release_global_quota: kiss_ghtab_get failed&lt;BR /&gt;Nov 27 16:36:30 2018 fw02 last message repeated 3 times&lt;BR /&gt;Nov 27 16:36:31 2018 fw02 kernel: [fw4_1];FW-1: fwha_process_state_msg: Update state of member id 0 to ACTIVE due to the member report message&lt;BR /&gt;Nov 27 16:36:31 2018 fw02 kernel: [fw4_1];fwha_set_backup_mode: Try to update local state to STANDBY because of ID 0 is ACTIVE or READY and with higher priority&lt;BR /&gt;Nov 27 16:36:31 2018 fw02 kernel: [fw4_1];FW-1: 
fwha_set_new_local_state: Setting state of fwha_local_id(1) to STANDBY&lt;BR /&gt;Nov 27 16:36:31 2018 fw02 kernel: [fw4_1];FW-1: fwha_update_local_state: Local machine state changed to STANDBY&lt;BR /&gt;Nov 27 16:36:31 2018 fw02 kernel: [fw4_1];FW-1: fwha_update_state: ID 0 (state DOWN -&amp;gt; ACTIVE) (time 412747.9)&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 27 Nov 2018 19:21:14 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30653#M6369</guid>
      <dc:creator>MATEUS_SALGADO</dc:creator>
      <dc:date>2018-11-27T19:21:14Z</dc:date>
    </item>
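One detail worth pulling out of both excerpts above: the check_other_machine_activity line records when the peer was last heard and the current time ("since 403715.3 and now 403718.3"), i.e. a full 3-second silence before the peer is declared DEAD. That suggests a sustained loss of CCP traffic on the sync link rather than a single dropped hello. A small sketch to extract that gap from such lines (the helper name `silence_seconds` is ours; it assumes the exact wording quoted in the posts):

```python
import re

# Matches the timestamps in the check_other_machine_activity line, e.g.
# "...Update state of member id 0 to DEAD, didn't hear from it
#  since 412743.5 and now 412746.5"
DEAD_RE = re.compile(
    r"didn't hear from it since (\d+(?:\.\d+)?) and now (\d+(?:\.\d+)?)"
)

def silence_seconds(line):
    """How long the peer was silent before being declared DEAD, or None."""
    match = DEAD_RE.search(line)
    if match is None:
        return None
    since, now = float(match.group(1)), float(match.group(2))
    return now - since
```

If the gap is consistently the same few seconds, the flap is long enough to show up in switch logs and packet captures, as suggested in the replies below.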
    <item>
      <title>Re: Issue on the sync interface</title>
      <link>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30654#M6370</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;&lt;SPAN style="font-size: medium;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A _jive_internal="true" href="https://community.checkpoint.com/people/dfa3b1dc-b5d2-4ffd-8faf-11291da35ad0"&gt;MATEUS SALGADO&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Did you try to connect both the firewall Sync interfaces back to back?&lt;/P&gt;&lt;P&gt;&amp;nbsp;At least this can be done as a test. We had a similar issue which was resolved by doing this.&amp;nbsp;&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 28 Nov 2018 09:36:58 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30654#M6370</guid>
      <dc:creator>Moe_89</dc:creator>
      <dc:date>2018-11-28T09:36:58Z</dc:date>
    </item>
    <item>
      <title>Re: Issue on the sync interface</title>
      <link>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30655#M6371</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Your &lt;STRONG&gt;eth8&lt;/STRONG&gt;&amp;nbsp;still goes down, or at least is not able to communicate between the firewalls as before.&lt;/P&gt;&lt;P&gt;Have you checked the switch logs for the VLAN your sync is connected to?&lt;/P&gt;&lt;P&gt;Is it possible that you have another firewall cluster connected to the same VLAN?&lt;/P&gt;&lt;P&gt;A straight cable (if that's an option) could give some answers too, as suggested earlier.&lt;/P&gt;&lt;P&gt;If the flap is predictable in time, try to collect a packet capture from eth8 to see what's going on there.&lt;/P&gt;&lt;P&gt;Why don't you make FW2 the higher priority and see if that one suffers from the same symptoms?&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 28 Nov 2018 09:59:48 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30655#M6371</guid>
      <dc:creator>Kaspars_Zibarts</dc:creator>
      <dc:date>2018-11-28T09:59:48Z</dc:date>
    </item>
    <item>
      <title>Re: Issue on the sync interface</title>
      <link>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30656#M6372</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;From expert mode run "ifconfig eth8" on the firewall, is the "carrier" value shown nonzero?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Also please post outputs of:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;ifconfig eth8&lt;/P&gt;&lt;P&gt;ethtool -S eth8&lt;/P&gt;&lt;P&gt;fw ctl pstat&lt;/P&gt;&lt;P&gt;cphaprob syncstat&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;--&lt;BR /&gt; Second Edition of my "Max Power" Firewall Book&lt;BR /&gt; Now Available at &lt;A href="http://www.maxpowerfirewalls.com" target="_blank"&gt;http://www.maxpowerfirewalls.com&lt;/A&gt;&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 28 Nov 2018 13:20:31 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30656#M6372</guid>
      <dc:creator>Timothy_Hall</dc:creator>
      <dc:date>2018-11-28T13:20:31Z</dc:date>
    </item>
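The `ethtool -S` output requested above is a flat `name: value` listing, so it is easy to snapshot before and after a flap and flag any error-like counter that moved. A minimal sketch (the helper names `parse_ethtool_stats` and `suspicious` are ours; it assumes the output layout shown in the reply below):

```python
def parse_ethtool_stats(text):
    """Parse `ethtool -S <iface>` output into a {counter_name: int} dict."""
    stats = {}
    for raw in text.splitlines():
        line = raw.strip()
        if line.endswith(":"):          # skip the "NIC statistics:" header
            continue
        name, sep, value = line.rpartition(":")
        if not sep:
            continue
        try:
            stats[name.strip()] = int(value)
        except ValueError:
            pass                        # ignore any non-numeric lines
    return stats

def suspicious(stats):
    """Return the counters that hint at a link problem when nonzero."""
    keywords = ("error", "drop", "missed", "timeout", "collision")
    return {name: value for name, value in stats.items()
            if value != 0 and any(k in name for k in keywords)}
```

Applied to the fw01 output posted below, this would single out `tx_timeout_count: 1`, the only nonzero error-family counter on either member.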
    <item>
      <title>Re: Issue on the sync interface</title>
      <link>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30657#M6373</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi guys! &lt;BR /&gt;First of all, thank you for the posts; on to the answers:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;A href="https://community.checkpoint.com/migrated-users/47189"&gt;Mubarizuddin Mohammed&lt;/A&gt; and &lt;A href="https://community.checkpoint.com/migrated-users/47831"&gt;Kaspars Zibarts&lt;/A&gt;, the cluster members have been connected via a &lt;SPAN style="color: #333333; background-color: #ffffff;"&gt;straight cable since the start of this thread (I&amp;nbsp;only changed from the interface called Sync to eth8). &lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #333333; background-color: #ffffff;"&gt;About the packet capture, I can't predict when this happens; I only know it occurs during business hours (7 AM to 7 PM).&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #333333; background-color: #ffffff;"&gt;About the cluster priority, the gateways are configured with the option "Maintain current active Cluster Member".&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #333333; background-color: #ffffff;"&gt;&lt;A href="https://community.checkpoint.com/migrated-users/41625"&gt;Timothy Hall&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #333333; background-color: #ffffff;"&gt;On both members the carrier is 0.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #333333; background-color: #ffffff;"&gt;The requested outputs follow:&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG style="background-color: #ffffff; color: #333333;"&gt;ifconfig eth8&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #333333; background-color: #ffffff;"&gt;[Expert@fw01:0]# ifconfig eth8&lt;BR /&gt;eth8 Link encap:Ethernet HWaddr 00:1C:7F:81:66:91&lt;BR /&gt; inet addr:172.31.255.221 Bcast:172.31.255.223 Mask:255.255.255.252&lt;BR /&gt; UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1&lt;BR /&gt; RX 
packets:33119286 errors:0 dropped:0 overruns:0 frame:0&lt;BR /&gt; TX packets:242265575 errors:0 dropped:0 overruns:0 carrier:0&lt;BR /&gt; collisions:0 txqueuelen:1000&lt;BR /&gt; RX bytes:4570799625 (4.2 GiB) TX bytes:273528052825 (254.7 GiB)&lt;BR /&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #333333; background-color: #ffffff;"&gt;[Expert@fw02:0]# ifconfig eth8&lt;BR /&gt;eth8 Link encap:Ethernet HWaddr 00:1C:7F:81:66:69&lt;BR /&gt; inet addr:172.31.255.222 Bcast:172.31.255.223 Mask:255.255.255.252&lt;BR /&gt; UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1&lt;BR /&gt; RX packets:242105389 errors:0 dropped:0 overruns:0 frame:0&lt;BR /&gt; TX packets:33023750 errors:0 dropped:0 overruns:0 carrier:0&lt;BR /&gt; collisions:0 txqueuelen:1000&lt;BR /&gt; RX bytes:273360672145 (254.5 GiB) TX bytes:4456865387 (4.1 GiB)&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG style="background-color: #ffffff; color: #333333;"&gt;ethtool -S eth8&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #333333; background-color: #ffffff;"&gt;[Expert@fw01:0]# ethtool -S eth8&lt;BR /&gt;NIC statistics:&lt;BR /&gt; rx_packets: 33123568&lt;BR /&gt; tx_packets: 242327493&lt;BR /&gt; rx_bytes: 4704099767&lt;BR /&gt; tx_bytes: 274570256636&lt;BR /&gt; rx_broadcast: 321931&lt;BR /&gt; tx_broadcast: 323124&lt;BR /&gt; rx_multicast: 32645695&lt;BR /&gt; tx_multicast: 238322690&lt;BR /&gt; multicast: 32645695&lt;BR /&gt; collisions: 0&lt;BR /&gt; rx_crc_errors: 0&lt;BR /&gt; rx_no_buffer_count: 0&lt;BR /&gt; rx_missed_errors: 0&lt;BR /&gt; tx_aborted_errors: 0&lt;BR /&gt; tx_carrier_errors: 0&lt;BR /&gt; tx_window_errors: 0&lt;BR /&gt; tx_abort_late_coll: 0&lt;BR /&gt; tx_deferred_ok: 0&lt;BR /&gt; tx_single_coll_ok: 0&lt;BR /&gt; tx_multi_coll_ok: 0&lt;BR /&gt; tx_timeout_count: 1&lt;BR /&gt; rx_long_length_errors: 0&lt;BR /&gt; rx_short_length_errors: 0&lt;BR /&gt; rx_align_errors: 0&lt;BR /&gt; tx_tcp_seg_good: 0&lt;BR 
/&gt; tx_tcp_seg_failed: 0&lt;BR /&gt; rx_flow_control_xon: 0&lt;BR /&gt; rx_flow_control_xoff: 0&lt;BR /&gt; tx_flow_control_xon: 0&lt;BR /&gt; tx_flow_control_xoff: 0&lt;BR /&gt; rx_long_byte_count: 4704099767&lt;BR /&gt; tx_dma_out_of_sync: 0&lt;BR /&gt; lro_aggregated: 0&lt;BR /&gt; lro_flushed: 0&lt;BR /&gt; lro_recycled: 0&lt;BR /&gt; tx_smbus: 0&lt;BR /&gt; rx_smbus: 0&lt;BR /&gt; dropped_smbus: 0&lt;BR /&gt; os2bmc_rx_by_bmc: 0&lt;BR /&gt; os2bmc_tx_by_bmc: 0&lt;BR /&gt; os2bmc_tx_by_host: 0&lt;BR /&gt; os2bmc_rx_by_host: 0&lt;BR /&gt; rx_errors: 0&lt;BR /&gt; tx_errors: 0&lt;BR /&gt; tx_dropped: 0&lt;BR /&gt; rx_length_errors: 0&lt;BR /&gt; rx_over_errors: 0&lt;BR /&gt; rx_frame_errors: 0&lt;BR /&gt; rx_fifo_errors: 0&lt;BR /&gt; tx_fifo_errors: 0&lt;BR /&gt; tx_heartbeat_errors: 0&lt;BR /&gt; tx_queue_0_packets: 242327681&lt;BR /&gt; tx_queue_0_bytes: 273600968980&lt;BR /&gt; tx_queue_0_restart: 0&lt;BR /&gt; rx_queue_0_packets: 33123568&lt;BR /&gt; rx_queue_0_bytes: 4571605495&lt;BR /&gt; rx_queue_0_drops: 0&lt;BR /&gt; rx_queue_0_csum_err: 0&lt;BR /&gt; rx_queue_0_alloc_failed: 0&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #333333; background-color: #ffffff;"&gt;[Expert@fw02:0]# ethtool -S eth8&lt;BR /&gt;NIC statistics:&lt;BR /&gt; rx_packets: 242350310&lt;BR /&gt; tx_packets: 33040613&lt;BR /&gt; rx_bytes: 274617825638&lt;BR /&gt; tx_bytes: 4592014643&lt;BR /&gt; rx_broadcast: 320793&lt;BR /&gt; tx_broadcast: 321873&lt;BR /&gt; rx_multicast: 238353795&lt;BR /&gt; tx_multicast: 32602820&lt;BR /&gt; multicast: 238353795&lt;BR /&gt; collisions: 0&lt;BR /&gt; rx_crc_errors: 0&lt;BR /&gt; rx_no_buffer_count: 0&lt;BR /&gt; rx_missed_errors: 0&lt;BR /&gt; tx_aborted_errors: 0&lt;BR /&gt; tx_carrier_errors: 0&lt;BR /&gt; tx_window_errors: 0&lt;BR /&gt; tx_abort_late_coll: 0&lt;BR /&gt; tx_deferred_ok: 0&lt;BR /&gt; tx_single_coll_ok: 0&lt;BR /&gt; tx_multi_coll_ok: 0&lt;BR /&gt; tx_timeout_count: 0&lt;BR /&gt; 
rx_long_length_errors: 0&lt;BR /&gt; rx_short_length_errors: 0&lt;BR /&gt; rx_align_errors: 0&lt;BR /&gt; tx_tcp_seg_good: 0&lt;BR /&gt; tx_tcp_seg_failed: 0&lt;BR /&gt; rx_flow_control_xon: 0&lt;BR /&gt; rx_flow_control_xoff: 0&lt;BR /&gt; tx_flow_control_xon: 0&lt;BR /&gt; tx_flow_control_xoff: 0&lt;BR /&gt; rx_long_byte_count: 274617825638&lt;BR /&gt; tx_dma_out_of_sync: 0&lt;BR /&gt; lro_aggregated: 0&lt;BR /&gt; lro_flushed: 0&lt;BR /&gt; lro_recycled: 0&lt;BR /&gt; tx_smbus: 0&lt;BR /&gt; rx_smbus: 0&lt;BR /&gt; dropped_smbus: 0&lt;BR /&gt; os2bmc_rx_by_bmc: 0&lt;BR /&gt; os2bmc_tx_by_bmc: 0&lt;BR /&gt; os2bmc_tx_by_host: 0&lt;BR /&gt; os2bmc_rx_by_host: 0&lt;BR /&gt; rx_errors: 0&lt;BR /&gt; tx_errors: 0&lt;BR /&gt; tx_dropped: 0&lt;BR /&gt; rx_length_errors: 0&lt;BR /&gt; rx_over_errors: 0&lt;BR /&gt; rx_frame_errors: 0&lt;BR /&gt; rx_fifo_errors: 0&lt;BR /&gt; tx_fifo_errors: 0&lt;BR /&gt; tx_heartbeat_errors: 0&lt;BR /&gt; tx_queue_0_packets: 33040613&lt;BR /&gt; tx_queue_0_bytes: 4459729583&lt;BR /&gt; tx_queue_0_restart: 0&lt;BR /&gt; rx_queue_0_packets: 242350310&lt;BR /&gt; rx_queue_0_bytes: 273648424398&lt;BR /&gt; rx_queue_0_drops: 0&lt;BR /&gt; rx_queue_0_csum_err: 0&lt;BR /&gt; rx_queue_0_alloc_failed: 0&lt;BR /&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG style="background-color: #ffffff; : ; color: #3d3d3d;"&gt;fw ctl pstat&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;[Expert@fw01:0]# fw ctl pstat&lt;/P&gt;&lt;P&gt;System Capacity Summary:&lt;BR /&gt; Memory used: 13% (3318 MB out of 23989 MB) - below watermark&lt;BR /&gt; Concurrent Connections: 52435 (Unlimited)&lt;BR /&gt; Aggressive Aging is enabled, not active&lt;/P&gt;&lt;P&gt;Hash kernel memory (hmem) statistics:&lt;BR /&gt; Total memory allocated: 2512388096 bytes in 613376 (4096 bytes) blocks using 1 pool&lt;BR /&gt; Total memory bytes used: 0 unused: 2512388096 (100.00%) peak: 1644289672&lt;BR /&gt; Total memory blocks used: 0 unused: 613376 (100%) peak: 
425749&lt;BR /&gt; Allocations: 921927929 alloc, 0 failed alloc, 910377276 free&lt;/P&gt;&lt;P&gt;System kernel memory (smem) statistics:&lt;BR /&gt; Total memory bytes used: 4247545836 peak: 4597334808&lt;BR /&gt; Total memory bytes wasted: 89520318&lt;BR /&gt; Blocking memory bytes used: 98583572 peak: 148710952&lt;BR /&gt; Non-Blocking memory bytes used: 4148962264 peak: 4448623856&lt;BR /&gt; Allocations: 38576947 alloc, 0 failed alloc, 38548237 free, 0 failed free&lt;BR /&gt; vmalloc bytes used: 4129415616 expensive: no&lt;/P&gt;&lt;P&gt;Kernel memory (kmem) statistics:&lt;BR /&gt; Total memory bytes used: 2807431148 peak: 3353144992&lt;BR /&gt; Allocations: 960493754 alloc, 0 failed alloc&lt;BR /&gt; 948917252 free, 0 failed free&lt;BR /&gt; External Allocations: 8298240 for packets, 152281056 for SXL&lt;/P&gt;&lt;P&gt;Cookies:&lt;BR /&gt; 2892022307 total, 11 alloc, 11 free,&lt;BR /&gt; 22507831 dup, 3453581577 get, 1905764438 put,&lt;BR /&gt; 2039508532 len, 23406809 cached len, 0 chain alloc,&lt;BR /&gt; 0 chain free&lt;/P&gt;&lt;P&gt;Connections:&lt;BR /&gt; 101983124 total, 63467267 TCP, 24356197 UDP, 14127765 ICMP,&lt;BR /&gt; 31895 other, 273 anticipated, 0 recovered, 52435 concurrent,&lt;BR /&gt; 75786 peak concurrent&lt;/P&gt;&lt;P&gt;Fragments:&lt;BR /&gt; 25905384 fragments, 12940787 packets, 6562 expired, 0 short,&lt;BR /&gt; 0 large, 16 duplicates, 36 failures&lt;/P&gt;&lt;P&gt;NAT:&lt;BR /&gt; 710997013/0 forw, 1020399649/0 bckw, 1713879696 tcpudp,&lt;BR /&gt; 15522032 icmp, 66808162-46398448 alloc&lt;/P&gt;&lt;P&gt;Sync:&lt;BR /&gt; Version: new&lt;BR /&gt; Status: Able to Send/Receive sync packets&lt;BR /&gt; Sync packets sent:&lt;BR /&gt; total : 234541097, retransmitted : 12, retrans reqs : 194, acks : 565604&lt;BR /&gt; Sync packets received:&lt;BR /&gt; total : 3194567, were queued : 330, dropped by net : 267&lt;BR /&gt; retrans reqs : 12, received 109468 acks&lt;BR /&gt; retrans reqs for illegal seq : 0&lt;BR /&gt; dropped updates as a 
result of sync overload: 193&lt;BR /&gt; Callback statistics: handled 108306 cb, average delay : 1, max delay : 152&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;[Expert@fw02:0]# fw ctl pstat&lt;/P&gt;&lt;P&gt;System Capacity Summary:&lt;BR /&gt; Memory used: 9% (2210 MB out of 23989 MB) - below watermark&lt;BR /&gt; Concurrent Connections: 51740 (Unlimited)&lt;BR /&gt; Aggressive Aging is enabled, not active&lt;/P&gt;&lt;P&gt;Hash kernel memory (hmem) statistics:&lt;BR /&gt; Total memory allocated: 2512388096 bytes in 613376 (4096 bytes) blocks using 1 pool&lt;BR /&gt; Total memory bytes used: 0 unused: 2512388096 (100.00%) peak: 706276448&lt;BR /&gt; Total memory blocks used: 0 unused: 613376 (100%) peak: 180436&lt;BR /&gt; Allocations: 1836486752 alloc, 0 failed alloc, 1832764488 free&lt;/P&gt;&lt;P&gt;System kernel memory (smem) statistics:&lt;BR /&gt; Total memory bytes used: 4019361572 peak: 4322083508&lt;BR /&gt; Total memory bytes wasted: 9222979&lt;BR /&gt; Blocking memory bytes used: 9536488 peak: 11610296&lt;BR /&gt; Non-Blocking memory bytes used: 4009825084 peak: 4310473212&lt;BR /&gt; Allocations: 357809 alloc, 0 failed alloc, 351482 free, 0 failed free&lt;BR /&gt; vmalloc bytes used: 3998226488 expensive: no&lt;/P&gt;&lt;P&gt;Kernel memory (kmem) statistics:&lt;BR /&gt; Total memory bytes used: 1900327696 peak: 2440715436&lt;BR /&gt; Allocations: 1836833507 alloc, 0 failed alloc&lt;BR /&gt; 1833107776 free, 0 failed free&lt;BR /&gt; External Allocations: 3072 for packets, 99247102 for SXL&lt;/P&gt;&lt;P&gt;Cookies:&lt;BR /&gt; 269724906 total, 0 alloc, 0 free,&lt;BR /&gt; 46698 dup, 824924779 get, 2437180 put,&lt;BR /&gt; 272039687 len, 115648 cached len, 0 chain alloc,&lt;BR /&gt; 0 chain free&lt;/P&gt;&lt;P&gt;Connections:&lt;BR /&gt; 814760 total, 25611 TCP, 206335 UDP, 551009 ICMP,&lt;BR /&gt; 31805 other, 0 anticipated, 0 recovered, 51737 concurrent,&lt;BR /&gt; 75691 peak concurrent&lt;/P&gt;&lt;P&gt;Fragments:&lt;BR /&gt; 52898 fragments, 26383 
packets, 2 expired, 0 short,&lt;BR /&gt; 0 large, 0 duplicates, 0 failures&lt;/P&gt;&lt;P&gt;NAT:&lt;BR /&gt; 1737046/0 forw, 83816/0 bckw, 1431059 tcpudp,&lt;BR /&gt; 388069 icmp, 621656-46351628 alloc&lt;/P&gt;&lt;P&gt;Sync:&lt;BR /&gt; Version: new&lt;BR /&gt; Status: Able to Send/Receive sync packets&lt;BR /&gt; Sync packets sent:&lt;BR /&gt; total : 3163702, retransmitted : 267, retrans reqs : 12, acks : 109257&lt;BR /&gt; Sync packets received:&lt;BR /&gt; total : 234507506, were queued : 9802, dropped by net : 12&lt;BR /&gt; retrans reqs : 194, received 565578 acks&lt;BR /&gt; retrans reqs for illegal seq : 0&lt;BR /&gt; dropped updates as a result of sync overload: 0&lt;BR /&gt; Callback statistics: handled 560483 cb, average delay : 1, max delay : 7&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;cphaprob syncstat&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;[Expert@fw01:0]# cphaprob syncstat&lt;/P&gt;&lt;P&gt;Sync Statistics (IDs of F&amp;amp;A Peers - 1 ):&lt;/P&gt;&lt;P&gt;Other Member Updates:&lt;BR /&gt;Sent retransmission requests................... 194&lt;BR /&gt;Avg missing updates per request................ 1&lt;BR /&gt;Old or too-new arriving updates................ 0&lt;BR /&gt;Unsynced missing updates....................... 193&lt;BR /&gt;Lost sync connection (num of events)........... 42&lt;BR /&gt;Timed out sync connection ..................... 0&lt;/P&gt;&lt;P&gt;Local Updates:&lt;BR /&gt;Total generated updates ....................... 33518802&lt;BR /&gt;Recv Retransmission requests................... 12&lt;BR /&gt;Recv Duplicate Retrans request................. 0&lt;/P&gt;&lt;P&gt;Blocking Events................................ 0&lt;BR /&gt;Blocked packets................................ 0&lt;BR /&gt;Max length of sending queue.................... 0&lt;BR /&gt;Avg length of sending queue.................... 
0&lt;BR /&gt;Hold Pkts events............................... 108320&lt;BR /&gt;Unhold Pkt events.............................. 108320&lt;BR /&gt;Not held due to no members..................... 47&lt;BR /&gt;Max held duration (sync ticks)................. 0&lt;BR /&gt;Avg held duration (sync ticks)................. 0&lt;/P&gt;&lt;P&gt;Timers:&lt;BR /&gt;Sync tick (ms)................................. 100&lt;BR /&gt;CPHA tick (ms)................................. 100&lt;/P&gt;&lt;P&gt;Queues:&lt;BR /&gt;Sending queue size............................. 512&lt;BR /&gt;Receiving queue size........................... 256&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;[Expert@fw02:0]# cphaprob syncstat&lt;/P&gt;&lt;P&gt;Sync Statistics (IDs of F&amp;amp;A Peers - 1 ):&lt;/P&gt;&lt;P&gt;Other Member Updates:&lt;BR /&gt;Sent retransmission requests................... 12&lt;BR /&gt;Avg missing updates per request................ 1&lt;BR /&gt;Old or too-new arriving updates................ 12&lt;BR /&gt;Unsynced missing updates....................... 0&lt;BR /&gt;Lost sync connection (num of events)........... 42&lt;BR /&gt;Timed out sync connection ..................... 9&lt;/P&gt;&lt;P&gt;Local Updates:&lt;BR /&gt;Total generated updates ....................... 3385198&lt;BR /&gt;Recv Retransmission requests................... 194&lt;BR /&gt;Recv Duplicate Retrans request................. 0&lt;/P&gt;&lt;P&gt;Blocking Events................................ 0&lt;BR /&gt;Blocked packets................................ 0&lt;BR /&gt;Max length of sending queue.................... 0&lt;BR /&gt;Avg length of sending queue.................... 0&lt;BR /&gt;Hold Pkts events............................... 560569&lt;BR /&gt;Unhold Pkt events.............................. 560569&lt;BR /&gt;Not held due to no members..................... 36&lt;BR /&gt;Max held duration (sync ticks)................. 
0&lt;BR /&gt;Avg held duration (sync ticks)................. 0&lt;/P&gt;&lt;P&gt;Timers:&lt;BR /&gt;Sync tick (ms)................................. 100&lt;BR /&gt;CPHA tick (ms)................................. 100&lt;/P&gt;&lt;P&gt;Queues:&lt;BR /&gt;Sending queue size............................. 512&lt;BR /&gt;Receiving queue size........................... 256&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 28 Nov 2018 14:19:30 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30657#M6373</guid>
      <dc:creator>MATEUS_SALGADO</dc:creator>
      <dc:date>2018-11-28T14:19:30Z</dc:date>
    </item>
    <item>
      <title>Re: Issue on the sync interface</title>
      <link>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30658#M6374</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;The eth8 interfaces at a hardware/Gaia level look healthy and the sync network does not appear to be overloaded.&amp;nbsp; One thing that was a little odd on fw01:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #333333; background-color: #ffffff;"&gt;tx_timeout_count: 1&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #333333; background-color: #ffffff;"&gt;Don't think I've ever seen this counter be nonzero before; it indicates that a transmit operation timed out because an interrupt was lost or the NIC lost its mind.&amp;nbsp; I'm assuming that you have experienced many sync interface "failures" and this counter does not increment every time they happen, so it is probably nothing.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #333333; background-color: #ffffff;"&gt;Since the underlying NIC is stable and not overloaded (and you already tried switching ports from Sync to eth8) it sounds like some kind of Check Point code issue.&amp;nbsp; The log entry indicates that the member declaring the failure has not heard anything from the other member via the sync interface for a full 3 seconds, which is a relative eternity for sync updates that are supposed to happen 10-20 times a second.&amp;nbsp; A few questions:&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;SPAN style="color: #333333; background-color: #ffffff;"&gt;Are both cluster members declaring the eth8 interface down at roughly the same time, or is only the active member doing it?&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;SPAN style="color: #333333; background-color: #ffffff;"&gt;Assuming that fw01 is initially active, if you fail over to fw02 and run there for a while, does the eth8 failure now happen on fw02 and not fw01?&amp;nbsp; Or does the eth8 failure stick with fw01 (or stop completely) when running with fw02 
active?&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;SPAN style="color: #333333; background-color: #ffffff;"&gt;Is the value "Lost sync connection (num of events)........... 42" reported by &lt;STRONG&gt;cphaprob syncstat&lt;/STRONG&gt; incrementing on its own?&amp;nbsp; It is expected that this counter will bump up a few every time policy is reinstalled as sync has to restart, but should not increment on its own without the policy being loaded.&amp;nbsp; Does it increment every time eth8 is declared dead?&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;SPAN style="color: #333333; background-color: #ffffff;"&gt;Are any or lots of process core dumps getting barfed into the &lt;STRONG&gt;/var/log/dump/usermode&lt;/STRONG&gt; directory every time eth8 is declared dead?&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;SPAN style="color: #333333; background-color: #ffffff;"&gt;Any establishment or loss of cluster state sync should be logged in &lt;STRONG&gt;$FWDIR/log/fwd.elg&lt;/STRONG&gt;.&amp;nbsp; Does this file have anything interesting added to it around the time of the issue?&amp;nbsp; Any chance the fwd daemon is crashing when sync is lost?&amp;nbsp; (Use &lt;STRONG&gt;cpwd_admin list&lt;/STRONG&gt; to check this&lt;STRONG&gt;)&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;SPAN style="color: #333333; background-color: #ffffff;"&gt;Is eth8 configured as "1st Sync" or "Cluster + 1st Sync" on the cluster object?&lt;BR /&gt;&lt;/SPAN&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&lt;SPAN style="color: #333333; background-color: #ffffff;"&gt;--&lt;BR /&gt; Second Edition of my "Max Power" Firewall Book&lt;BR /&gt; Now Available at &lt;A href="http://www.maxpowerfirewalls.com" target="_blank"&gt;http://www.maxpowerfirewalls.com&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 29 Nov 2018 03:06:16 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30658#M6374</guid>
      <dc:creator>Timothy_Hall</dc:creator>
      <dc:date>2018-11-29T03:06:16Z</dc:date>
    </item>
    <item>
      <title>Re: Issue on the sync interface</title>
      <link>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30659#M6375</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;&lt;A href="https://community.checkpoint.com/migrated-users/41625"&gt;Timothy Hall&lt;/A&gt;, here are the answers to your questions (sorry about the delay)&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Are both cluster members declaring the eth8 interface down at roughly the same time, or is only the active member doing it?&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;A:&lt;/STRONG&gt; Actually no, the "eth8 interface is down" message appears only on the standby member. On the active member, the logs instead report that the standby member is the one that is down (I'll attach screenshots of both)&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Active&lt;/P&gt;&lt;P&gt;&lt;IMG alt="" class="image-1 jive-image j-img-original" src="https://community.checkpoint.com/legacyfs/online/checkpoint/76098_fw01.PNG" /&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Standby&lt;/P&gt;&lt;P&gt;&lt;IMG alt="" class="image-2 jive-image j-img-original" src="https://community.checkpoint.com/legacyfs/online/checkpoint/76099_fw02.PNG" /&gt;&lt;BR /&gt;Note: Some messages about fwmultik_dispatch started to appear after we enabled the priority queue (as requested by the TAC)&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;STRONG&gt;Assuming that fw01 is initially active, if you fail over to fw02 and run there for a while, does the eth8 failure now happen on fw02 and not fw01?&lt;/STRONG&gt; &lt;STRONG&gt;Or does the eth8 failure stick with fw01 (or stop completely) when running with fw02 active?&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;A&lt;/STRONG&gt;: The same problem still happens, only the messages are reversed.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Is the value "Lost sync connection (num of events)........... 42" reported by cphaprob syncstat incrementing on its own? 
It is expected that this counter will bump up a few every time policy is reinstalled as sync has to restart, but should not increment on its own without the policy being loaded. Does it increment every time eth8 is declared dead?&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;A:&amp;nbsp;&lt;/STRONG&gt;Sorry Tim, but I cannot answer this question, because I don't have control over policy installation; my customer sometimes installs policy without consulting us.&lt;BR /&gt;(But today, I can see the value of "Lost sync connection" is 60)&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Are any or lots of process core dumps getting barfed into the /var/log/dump/usermode directory every time eth8 is declared dead?&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;A:&lt;/STRONG&gt; No core dumps on either member.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Any establishment or loss of cluster state sync should be logged in $FWDIR/log/fwd.elg. Does this file have anything interesting added to it around the time of the issue? Any chance the fwd daemon is crashing when sync is lost? (Use cpwd_admin list to check this)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;A:&lt;/STRONG&gt; In fwd.elg I don't see any log entries during the hour of the flapping (16:00 to 16:59)&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Is eth8 configured as "1st Sync" or "Cluster + 1st Sync" on the cluster object?&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;A:&lt;/STRONG&gt; Just 1st Sync&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 30 Nov 2018 18:02:52 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/Issue-on-the-sync-interface/m-p/30659#M6375</guid>
      <dc:creator>MATEUS_SALGADO</dc:creator>
      <dc:date>2018-11-30T18:02:52Z</dc:date>
    </item>
  </channel>
</rss>

