<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Cluster failover when enabling SNMP in Firewall and Security Management</title>
    <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Cluster-failover-when-enabling-SNMP/m-p/115133#M16151</link>
    <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;I am facing a strange issue when I try to enable SNMP v3 read-only on my cluster gateways. I have two 6600 gateways in a cluster running R81. I have tried twice to enable SNMP v3 via the Gaia portal, and both times, when I pressed Apply, I was disconnected from the Gaia portal without any changes being made. Investigating further, I noticed that both times a cluster failover occurred, with the following status:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;CP-2&amp;gt; cphaprob state&lt;/P&gt;&lt;P&gt;Cluster Mode: High Availability (Active Up) with IGMP Membership&lt;/P&gt;&lt;P&gt;ID Unique Address Assigned Load State Name&lt;/P&gt;&lt;P&gt;1 10.xxx.xxx.6 0% STANDBY CP-1&lt;BR /&gt;2 (local) 10.xxx.xxx.7 100% ACTIVE CP-2&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;Active PNOTEs: None&lt;/P&gt;&lt;P&gt;Last member state change event:&lt;BR /&gt;Event Code: CLUS-114704&lt;BR /&gt;State change: STANDBY -&amp;gt; ACTIVE&lt;BR /&gt;Reason for state change: No other ACTIVE members have been found in the cluster&lt;BR /&gt;Event time: Thu Apr 1 22:33:50 2021&lt;/P&gt;&lt;P&gt;Last cluster failover event:&lt;BR /&gt;Transition to new ACTIVE: Member 1 -&amp;gt; Member 2&lt;BR /&gt;Reason: ROUTED PNOTE&lt;BR /&gt;Event time: Thu Apr 1 22:33:50 2021&lt;/P&gt;&lt;P&gt;Cluster failover count:&lt;BR /&gt;Failover counter: 3&lt;BR /&gt;Time of counter reset: Fri Mar 26 10:54:34 2021 (reboot)&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;So I get this ROUTED PNOTE message and a cluster failover every time I try to enable SNMP v3. OSPF is enabled on the cluster, with OSPF peering to Cisco Nexus switches and Cisco 4431 routers. Can anyone advise how to solve this issue?&lt;/P&gt;</description>
    <pubDate>Thu, 01 Apr 2021 21:28:07 GMT</pubDate>
    <dc:creator>MladenAntesevic</dc:creator>
    <dc:date>2021-04-01T21:28:07Z</dc:date>
    <item>
      <title>Cluster failover when enabling SNMP</title>
      <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Cluster-failover-when-enabling-SNMP/m-p/115133#M16151</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;I am facing a strange issue when I try to enable SNMP v3 read-only on my cluster gateways. I have two 6600 gateways in a cluster running R81. I have tried twice to enable SNMP v3 via the Gaia portal, and both times, when I pressed Apply, I was disconnected from the Gaia portal without any changes being made. Investigating further, I noticed that both times a cluster failover occurred, with the following status:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;CP-2&amp;gt; cphaprob state&lt;/P&gt;&lt;P&gt;Cluster Mode: High Availability (Active Up) with IGMP Membership&lt;/P&gt;&lt;P&gt;ID Unique Address Assigned Load State Name&lt;/P&gt;&lt;P&gt;1 10.xxx.xxx.6 0% STANDBY CP-1&lt;BR /&gt;2 (local) 10.xxx.xxx.7 100% ACTIVE CP-2&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;Active PNOTEs: None&lt;/P&gt;&lt;P&gt;Last member state change event:&lt;BR /&gt;Event Code: CLUS-114704&lt;BR /&gt;State change: STANDBY -&amp;gt; ACTIVE&lt;BR /&gt;Reason for state change: No other ACTIVE members have been found in the cluster&lt;BR /&gt;Event time: Thu Apr 1 22:33:50 2021&lt;/P&gt;&lt;P&gt;Last cluster failover event:&lt;BR /&gt;Transition to new ACTIVE: Member 1 -&amp;gt; Member 2&lt;BR /&gt;Reason: ROUTED PNOTE&lt;BR /&gt;Event time: Thu Apr 1 22:33:50 2021&lt;/P&gt;&lt;P&gt;Cluster failover count:&lt;BR /&gt;Failover counter: 3&lt;BR /&gt;Time of counter reset: Fri Mar 26 10:54:34 2021 (reboot)&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;So I get this ROUTED PNOTE message and a cluster failover every time I try to enable SNMP v3. OSPF is enabled on the cluster, with OSPF peering to Cisco Nexus switches and Cisco 4431 routers. Can anyone advise how to solve this issue?&lt;/P&gt;</description>
      <pubDate>Thu, 01 Apr 2021 21:28:07 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Cluster-failover-when-enabling-SNMP/m-p/115133#M16151</guid>
      <dc:creator>MladenAntesevic</dc:creator>
      <dc:date>2021-04-01T21:28:07Z</dc:date>
    </item>
    <item>
      <title>Re: Cluster failover when enabling SNMP</title>
      <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Cluster-failover-when-enabling-SNMP/m-p/115167#M16163</link>
      <description>&lt;P&gt;What can you see in the&amp;nbsp;&lt;SPAN&gt;/var/log/routed.log.* messages? Does anything in particular stand out?&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Sat, 03 Apr 2021 13:50:13 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Cluster-failover-when-enabling-SNMP/m-p/115167#M16163</guid>
      <dc:creator>funkylicious</dc:creator>
      <dc:date>2021-04-03T13:50:13Z</dc:date>
    </item>
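    <!--
    The routed.log check suggested above can be run from expert mode on each cluster member. These are standard Check Point/Gaia commands; the "Exit routed" grep pattern is an assumption based on the restart marker lines routed writes to its trace file on exit:

      ls -l /var/log/routed.log*                   # current and rotated trace files, with timestamps
      grep -i "Exit routed" /var/log/routed.log    # daemon exit/restart markers
      cphaprob -l list                             # registered pnote devices, including routed
    -->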
    <item>
      <title>Re: Cluster failover when enabling SNMP</title>
      <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Cluster-failover-when-enabling-SNMP/m-p/115171#M16165</link>
      <description>&lt;P&gt;I can see that the routed daemon on the master restarted when I tried to enable SNMP. Here are the logs from both cluster members. I tried twice: first at 15:55, when the failover to the second cluster member occurred, and then later at 22:33, when the cluster master failed back to the primary:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Apr 1 15:55:33.550102 task_cmd_terminate(194): command subsystem terminated.&lt;BR /&gt;Apr 1 15:55:33.550207&lt;BR /&gt;Apr 1 15:55:33.550207 Exit routed[28500] version routed-10.17.2020-15:36:15&lt;BR /&gt;Apr 1 15:55:33.550207&lt;BR /&gt;Apr 1 15:55:34 trace_on: Tracing to "/var/log/routed.log" started&lt;BR /&gt;Apr 1 15:55:34 trace_on: Version routed-10.17.2020-15:36:15 (ice_main 995000083)&lt;BR /&gt;Apr 1 16:20:54.702109 cpcl_cxl_get_memberip_from_id(837): Sync IP is: xxx.xxx.xxx.7&lt;BR /&gt;Apr 1 16:24:00.166579 cpcl_cxl_get_memberip_from_id(837): Sync IP is: xxx.xxx.xxx.7&lt;BR /&gt;Apr 1 22:33:48.962633 recv(header) returns 0&lt;BR /&gt;Apr 1 22:33:48.962633 peer_remove(130): Entering !!!!&lt;BR /&gt;Apr 1 22:33:50.087259 mvc_check_for_cu: upgrade finished&lt;BR /&gt;Apr 1 22:33:50.087259 cpcl_master_init(6196): entering&lt;BR /&gt;Apr 1 22:33:50.087259 entering cpcl_master_init()&lt;BR /&gt;Apr 1 22:33:50.087259 cpcl_master_init(6254): sockpath is /tmp/sockvrf0&lt;BR /&gt;Apr 1 22:33:50.087259 leaving cpcl_master_init()&lt;BR /&gt;Apr 1 22:33:50.087259 cpcl_master_init(6330): leaving&lt;BR /&gt;Apr 1 22:33:50.087259 CLUSTER: Proto 7 enables sending in cluster&lt;BR /&gt;Apr 1 22:33:52.105413 cpcl_vrf_master_listen_accept(6098): entering cpcl_vrf_master_listen_accept&lt;BR /&gt;Apr 1 22:33:52.105413 cpcl_vrf_master_listen_accept(6170): leaving cpcl_vrf_master_listen_accept&lt;BR /&gt;Apr 1 22:33:52.105464 cpcl_vrf_recv_from_instance_manager(5918): instance 0 entering cpcl_vrf_recv_from_instance_manager&lt;BR /&gt;Apr 1 22:33:52.105464 cpcl_vrf_recv_from_instance_manager(5949): instance 0 recv returned 
4&lt;BR /&gt;Apr 1 22:33:52.105464 cpcl_vrf_recv_from_instance_manager(5975): instance 0 received fd 36&lt;BR /&gt;Apr 1 22:33:52.105464 cpcl_vrf_recv_from_instance_manager(6065): Deleting CPCL_IM_Peer_Task !!!!&lt;BR /&gt;Apr 1 22:33:52.105464 cpcl_vrf_recv_from_instance_manager(6071): instance 0 leaving cpcl_vrf_recv_from_instance_manager&lt;BR /&gt;Apr 1 22:33:53.125719 cpcl_vrf_master_send_vrf_finish(5798): instance 0 entering cpcl_vrf_master_send_vrf_finish&lt;BR /&gt;Apr 1 22:33:53.125719 cpcl_vrf_master_send_vrf_finish(5829): instance id 0 sending CLUSTER_INITIAL_VRF_SCM_XFER_DONE to peer 1&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Apr 1 15:55:32.870143 recv(header) returns 0&lt;BR /&gt;Apr 1 15:55:32.870143 peer_remove(130): Entering !!!!&lt;BR /&gt;Apr 1 15:55:34.021994 cpcl_master_init(6196): entering&lt;BR /&gt;Apr 1 15:55:34.021994 entering cpcl_master_init()&lt;BR /&gt;Apr 1 15:55:34.021994 cpcl_master_init(6254): sockpath is /tmp/sockvrf0&lt;BR /&gt;Apr 1 15:55:34.021994 leaving cpcl_master_init()&lt;BR /&gt;Apr 1 15:55:34.021994 cpcl_master_init(6330): leaving&lt;BR /&gt;Apr 1 15:55:34.021994 CLUSTER: Proto 7 enables sending in cluster&lt;BR /&gt;Apr 1 15:55:36.040136 cpcl_vrf_master_listen_accept(6098): entering cpcl_vrf_master_listen_accept&lt;BR /&gt;Apr 1 15:55:36.040136 cpcl_vrf_master_listen_accept(6170): leaving cpcl_vrf_master_listen_accept&lt;BR /&gt;Apr 1 15:55:36.040188 cpcl_vrf_recv_from_instance_manager(5918): instance 0 entering cpcl_vrf_recv_from_instance_manager&lt;BR /&gt;Apr 1 15:55:36.040188 cpcl_vrf_recv_from_instance_manager(5949): instance 0 recv returned 4&lt;BR /&gt;Apr 1 15:55:36.040188 cpcl_vrf_recv_from_instance_manager(5975): instance 0 received fd 36&lt;BR /&gt;Apr 1 15:55:36.040188 cpcl_vrf_recv_from_instance_manager(6065): Deleting CPCL_IM_Peer_Task !!!!&lt;BR /&gt;Apr 1 15:55:36.040188 cpcl_vrf_recv_from_instance_manager(6071): instance 0 leaving cpcl_vrf_recv_from_instance_manager&lt;BR /&gt;Apr 1 15:55:37.050531 
cpcl_vrf_master_send_vrf_finish(5798): instance 0 entering cpcl_vrf_master_send_vrf_finish&lt;BR /&gt;Apr 1 15:55:37.050531 cpcl_vrf_master_send_vrf_finish(5829): instance id 0 sending CLUSTER_INITIAL_VRF_SCM_XFER_DONE to peer 2&lt;BR /&gt;Apr 1 22:33:48.258116 task_cmd_terminate(194): command subsystem terminated.&lt;BR /&gt;Apr 1 22:33:48.258177&lt;BR /&gt;Apr 1 22:33:48.258177 Exit routed[28553] version routed-10.17.2020-15:36:15&lt;BR /&gt;Apr 1 22:33:48.258177&lt;BR /&gt;Apr 1 22:33:49 trace_on: Tracing to "/var/log/routed.log" started&lt;BR /&gt;Apr 1 22:33:49 trace_on: Version routed-10.17.2020-15:36:15 (ice_main 995000083)&lt;/P&gt;</description>
      <pubDate>Sat, 03 Apr 2021 20:56:37 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Cluster-failover-when-enabling-SNMP/m-p/115171#M16165</guid>
      <dc:creator>MladenAntesevic</dc:creator>
      <dc:date>2021-04-03T20:56:37Z</dc:date>
    </item>
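    <!--
    To correlate the routed restarts shown above with the ROUTED PNOTE failover, the daemon can be watched on each member while reproducing the SNMP change. These are standard Check Point commands:

      cpwd_admin list | grep -i routed   # WatchDog view: whether routed is up, and its start count
      cphaprob state                     # cluster member states before and after the change
    -->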
    <item>
      <title>Re: Cluster failover when enabling SNMP</title>
      <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Cluster-failover-when-enabling-SNMP/m-p/115183#M16168</link>
      <description>&lt;P&gt;I saw a similar post about this a few weeks ago, but I can't recall now what the outcome was, apologies. You may wish to contact TAC and open a case for this. Personally, I find that to be very unexpected behaviour.&lt;/P&gt;</description>
      <pubDate>Sun, 04 Apr 2021 01:43:57 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Cluster-failover-when-enabling-SNMP/m-p/115183#M16168</guid>
      <dc:creator>the_rock</dc:creator>
      <dc:date>2021-04-04T01:43:57Z</dc:date>
    </item>
    <item>
      <title>Re: Cluster failover when enabling SNMP</title>
      <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Cluster-failover-when-enabling-SNMP/m-p/115219#M16185</link>
      <description>&lt;P&gt;Feels like a bug.&lt;/P&gt;
&lt;P&gt;Is the installation running JHFA23?&lt;/P&gt;
&lt;P&gt;Is CCP being encrypted? I have seen weird issues when this is on (pre-R81).&lt;/P&gt;
&lt;P&gt;Has the same procedure been attempted using clish rather than the WebUI?&lt;/P&gt;</description>
      <pubDate>Sun, 04 Apr 2021 19:28:20 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Cluster-failover-when-enabling-SNMP/m-p/115219#M16185</guid>
      <dc:creator>genisis__</dc:creator>
      <dc:date>2021-04-04T19:28:20Z</dc:date>
    </item>
    <item>
      <title>Re: Cluster failover when enabling SNMP</title>
      <link>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Cluster-failover-when-enabling-SNMP/m-p/115223#M16189</link>
      <description>&lt;P&gt;No, I am not running JHFA23; I will check the release notes for anything similar to my case. CCP is not being encrypted. I left it at the default: unicast and unencrypted.&lt;/P&gt;&lt;P&gt;I will try to do the same thing using clish.&lt;/P&gt;</description>
      <pubDate>Sun, 04 Apr 2021 20:27:21 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Firewall-and-Security-Management/Cluster-failover-when-enabling-SNMP/m-p/115223#M16189</guid>
      <dc:creator>MladenAntesevic</dc:creator>
      <dc:date>2021-04-04T20:27:21Z</dc:date>
    </item>
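    <!--
    For the clish attempt mentioned above, enabling SNMPv3 read-only can be sketched roughly as follows. The command names follow Gaia's SNMP configuration syntax, but the user name and passphrases are placeholders, and the exact options may vary by Gaia version:

      set snmp agent on
      set snmp agent-version v3-Only
      add snmp usm user snmpv3ro security-level authPriv auth-pass-phrase <auth-phrase> privacy-pass-phrase <priv-phrase>
      save config
    -->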
  </channel>
</rss>

