<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: maestro vsx pnotes in Hyperscale Firewall (Maestro)</title>
    <link>https://community.checkpoint.com/t5/Hyperscale-Firewall-Maestro/maestro-vsx-pnotes/m-p/205211#M2417</link>
    <description>&lt;P&gt;I would recommend opening a TAC case for further troubleshooting and finding a solution.&lt;/P&gt;</description>
    <pubDate>Tue, 06 Feb 2024 19:06:44 GMT</pubDate>
    <dc:creator>Lari_Luoma</dc:creator>
    <dc:date>2024-02-06T19:06:44Z</dc:date>
    <item>
      <title>maestro vsx pnotes</title>
      <link>https://community.checkpoint.com/t5/Hyperscale-Firewall-Maestro/maestro-vsx-pnotes/m-p/205135#M2416</link>
      <description>&lt;P&gt;Hi experts,&lt;/P&gt;&lt;P&gt;We have the following configuration and are consistently experiencing policy installation failures. Please advise how to proceed with troubleshooting and/or a solution. Thank you.&lt;/P&gt;&lt;P&gt;Dual Orchestrator MHO-140 with 4 SGMs, configured with 5 Virtual Systems; R81.20 Jumbo Take 26.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;#/var/log/messages:&lt;/P&gt;&lt;P&gt;Feb 6 04:18:32 2024 mho-ch01-01 spike_detective: spike info: type: thread, thread id: 55092, thread name: fw_full, start time: 06/02/24 01:18:25, spike duration (sec): 6, initial cpu usage: 99, average cpu usage: 99, perf taken: 0&lt;/P&gt;&lt;P&gt;Feb 6 04:18:38 2024 mho-ch01-01 spike_detective: spike info: type: cpu, cpu core: 12, top consumer: fw_full, start time: 06/02/24 01:18:25, spike duration (sec): 12, initial cpu usage: 99, average cpu usage: 97, perf taken: 1&lt;/P&gt;&lt;P&gt;Feb 6 04:18:38 2024 mho-ch01-01 fwk: CLUS-111500-1_01: State change: ACTIVE -&amp;gt; DOWN | Reason: VSX PNOTE due to problem in Virtual System 5&lt;/P&gt;&lt;P&gt;Feb 6 04:18:38 2024 mho-ch01-01 fwk: CLUS-115700-1_01: State change: ACTIVE -&amp;gt; DOWN | Reason: Member state has been changed due to issue in Virtual System 5&lt;/P&gt;&lt;P&gt;Feb 6 04:18:38 2024 mho-ch01-01 fwk: CLUS-115700-1_01: State change: ACTIVE -&amp;gt; DOWN | Reason: Member state has been changed due to issue in Virtual System 5&lt;/P&gt;&lt;P&gt;Feb 6 04:18:38 2024 mho-ch01-01 kernel:[fw4_0];fwkdrv_process_vsx_global_data_in_kernel: Updating smo task to 1&lt;BR /&gt;Feb 6 04:18:38 2024 mho-ch01-01 kernel:ssm_com_should_update_switch: update is 1 due to change in present_members: (old) &amp;lt;=&amp;gt; (new)&lt;BR /&gt;Feb 6 04:18:38 2024 mho-ch01-01 fwk: CLUS-115700-1_01: State change: ACTIVE -&amp;gt; DOWN | Reason: Member state has been changed due to issue in Virtual System 5&lt;/P&gt;&lt;P&gt;Feb 6 04:18:38 2024 mho-ch01-01 fwk: 
CLUS-115700-1_01: State change: ACTIVE -&amp;gt; DOWN | Reason: Member state has been changed due to issue in Virtual System 5&lt;/P&gt;&lt;P&gt;Feb 6 04:18:38 2024 mho-ch01-01 lbd: lbd_trap_handler: cxl_is_blade_for_task(LOAD_BALANCE) = 0&lt;BR /&gt;Feb 6 04:18:38 2024 mho-ch01-01 routed[84478]: [routed] ERROR: cpcl_recv: Failed to receive cluster message header, connection will need to be reestablished. (message size is 0)&lt;BR /&gt;Feb 6 04:18:38 2024 mho-ch01-01 routed[84478]: [routed] ERROR: cpcl_recv: deleting peer task 0xac1f514 due to failure to read from the socket&lt;BR /&gt;Feb 6 04:18:38 2024 mho-ch01-01 routed[84478]: [routed] ERROR: cpcl_recv: Failed to receive cluster message header, connection will need to be reestablished. (message size is 0)&lt;BR /&gt;Feb 6 04:18:38 2024 mho-ch01-01 routed[84478]: [routed] ERROR: cpcl_recv: deleting peer task 0xabeeb74 due to failure to read from the socket&lt;BR /&gt;Feb 6 04:18:38 2024 mho-ch01-01 routed[84478]: [routed] ERROR: cpcl_recv: Failed to receive cluster message header, connection will need to be reestablished. (message size is 0)&lt;BR /&gt;Feb 6 04:18:38 2024 mho-ch01-01 routed[84478]: [routed] ERROR: cpcl_recv: deleting peer task 0xac1f514 due to failure to read from the socket&lt;BR /&gt;Feb 6 04:18:38 2024 mho-ch01-01 routed[84478]: [routed] ERROR: cpcl_recv: Failed to receive cluster message header, connection will need to be reestablished. 
(message size is 0)&lt;BR /&gt;Feb 6 04:18:38 2024 mho-ch01-01 routed[84478]: [routed] ERROR: cpcl_recv: deleting peer task 0xabeeb74 due to failure to read from the socket&lt;BR /&gt;Feb 6 04:18:39 2024 mho-ch01-01 fwk: CLUS-115700-1_01: State change: ACTIVE -&amp;gt; DOWN | Reason: Member state has been changed due to issue in Virtual System 5&lt;/P&gt;&lt;P&gt;Feb 6 04:18:39 2024 mho-ch01-01 fwk: CLUS-115700-1_01: State remains: DOWN | Reason: Previous problem resolved, Member state has been changed due to issue in Virtual System 0&lt;/P&gt;&lt;P&gt;Feb 6 04:18:39 2024 mho-ch01-01 fwk: CLUS-115700-1_01: State remains: DOWN | Reason: Previous problem resolved, Member state has been changed due to issue in Virtual System 5&lt;/P&gt;&lt;P&gt;Feb 6 04:18:39 2024 mho-ch01-01 routed[84478]: [routed] ERROR: cpcl_recv: Failed to receive cluster message header, connection will need to be reestablished. (message size is 0)&lt;BR /&gt;Feb 6 04:18:39 2024 mho-ch01-01 routed[84478]: [routed] ERROR: cpcl_recv: deleting peer task 0xabeeb74 due to failure to read from the socket&lt;BR /&gt;Feb 6 04:18:39 2024 mho-ch01-01 routed[84478]: [routed] ERROR: cpcl_recv: Failed to receive cluster message header, connection will need to be reestablished. errno = 104 (Connection reset by peer)&lt;BR /&gt;Feb 6 04:18:39 2024 mho-ch01-01 routed[84478]: [routed] ERROR: cpcl_recv: deleting peer task 0xabeec7c due to failure to read from the socket&lt;BR /&gt;Feb 6 04:18:39 2024 mho-ch01-01 routed[84478]: [routed] ERROR: cpcl_recv: Failed to receive cluster message header, connection will need to be reestablished. 
errno = 104 (Connection reset by peer)&lt;BR /&gt;Feb 6 04:18:39 2024 mho-ch01-01 routed[84478]: [routed] ERROR: cpcl_recv: deleting peer task 0xac38a24 due to failure to read from the socket&lt;BR /&gt;Feb 6 04:18:39 2024 mho-ch01-01 fwk: CLUS-111700-1_01: State remains: DOWN | Reason: Previous problem resolved, ROUTED PNOTE&lt;/P&gt;&lt;P&gt;Feb 6 04:18:39 2024 mho-ch01-01 fwk: CLUS-111700-1_01: State remains: DOWN | Reason: Previous problem resolved, ROUTED PNOTE&lt;/P&gt;&lt;P&gt;Feb 6 04:18:39 2024 mho-ch01-01 fwk: CLUS-111700-1_01: State remains: DOWN | Reason: Previous problem resolved, ROUTED PNOTE&lt;/P&gt;&lt;P&gt;Feb 6 04:18:39 2024 mho-ch01-01 fwk: CLUS-115700-1_01: State remains: DOWN | Reason: Previous problem resolved, Member state has been changed due to issue in Virtual System 1&lt;/P&gt;&lt;P&gt;Feb 6 04:18:39 2024 mho-ch01-01 sgm_pmd: SGM has left monitoring task&lt;BR /&gt;Feb 6 04:18:40 2024 mho-ch01-01 kernel:[fw4_0];fwkdrv_process_vsx_global_data_in_kernel: Updating smo task to 2&lt;BR /&gt;Feb 6 04:18:40 2024 mho-ch01-01 kernel:ssm_com_should_update_switch: update is 1 due to change in present_members: (old) &amp;lt;=&amp;gt; (new)&lt;BR /&gt;Feb 6 04:18:40 2024 mho-ch01-01 lbd: lbd_trap_handler: cxl_is_blade_for_task(LOAD_BALANCE) = 0&lt;BR /&gt;Feb 6 04:18:40 2024 mho-ch01-01 fwk: CLUS-115700-1_01: State remains: DOWN | Reason: Previous problem resolved, Member state has been changed due to issue in Virtual System 0&lt;/P&gt;&lt;P&gt;Feb 6 04:18:42 2024 mho-ch01-01 kernel:ssm_com_should_update_switch: update is 1 due to change in present_members: (old) &amp;lt;=&amp;gt; (new)&lt;BR /&gt;Feb 6 04:18:42 2024 mho-ch01-01 lbd: lbd_trap_handler: cxl_is_blade_for_task(LOAD_BALANCE) = 0&lt;BR /&gt;Feb 6 04:18:42 2024 mho-ch01-01 fwk: CLUS-111700-1_01: State remains: DOWN | Reason: Previous problem resolved, ROUTED PNOTE&lt;/P&gt;&lt;P&gt;Feb 6 04:18:45 2024 mho-ch01-01 fw: SMOApiExecHaDoHookCmd: Creating 
/opt/CPsuite-R81.20/fw1/CTX/CTX00005/conf/fw_updated&lt;BR /&gt;Feb 6 04:18:46 2024 mho-ch01-01 fwk: CLUS-115700-1_01: State remains: DOWN | Reason: Previous problem resolved, Member state has been changed due to issue in Virtual System 0&lt;/P&gt;&lt;P&gt;Feb 6 04:18:47 2024 mho-ch01-01 fwk: CLUS-115700-1_01: State remains: DOWN | Reason: Previous problem resolved, Member state has been changed due to issue in Virtual System 1&lt;/P&gt;&lt;P&gt;Feb 6 04:18:47 2024 mho-ch01-01 fwk: CLUS-115700-1_01: State remains: DOWN | Reason: Previous problem resolved, Member state has been changed due to issue in Virtual System 0&lt;/P&gt;&lt;P&gt;Feb 6 04:18:47 2024 mho-ch01-01 fwk: CLUS-115700-1_01: State remains: DOWN | Reason: Previous problem resolved, Member state has been changed due to issue in Virtual System 3&lt;/P&gt;&lt;P&gt;Feb 6 04:18:47 2024 mho-ch01-01 fwk: CLUS-115700-1_01: State remains: DOWN | Reason: Previous problem resolved, Member state has been changed due to issue in Virtual System 3&lt;/P&gt;&lt;P&gt;Feb 6 04:18:48 2024 mho-ch01-01 fwk: CLUS-115700-1_01: State remains: DOWN | Reason: Previous problem resolved, Member state has been changed due to issue in Virtual System 3&lt;/P&gt;&lt;P&gt;Feb 6 04:18:52 2024 mho-ch01-01 align_the_local_LACP_bonds_to_chassis_monitor_bonds.py: Aligning the local GW LACP bonds to chassis monitor GW LACP bonds&lt;BR /&gt;Feb 6 04:18:53 2024 mho-ch01-01 align_the_local_LACP_bonds_to_chassis_monitor_bonds.py: Skip aligning VS 0 bond4 slaves port numbers since the chassis monitor GW bond slaves and the local GW bond slaves port numbers are identical. 
Chassis Monitor: VS 0 bond4 First slave System MAC address 00:1c:7f:81:08:01 slaves {'eth2-08': '2', 'eth1-08': '1'}, local GW: VS 0 bond4 First slave System MAC address 00:1c:7f:81:08:01 slaves {'eth2-08': '2', 'eth1-08': '1'}&lt;BR /&gt;Feb 6 04:18:53 2024 mho-ch01-01 align_the_local_LACP_bonds_to_chassis_monitor_bonds.py: Skip aligning VS 0 bond5 slaves port numbers since the chassis monitor GW bond slaves and the local GW bond slaves port numbers are identical. Chassis Monitor: VS 0 bond5 First slave System MAC address 00:1c:7f:81:09:01 slaves {'eth2-09': '2', 'eth1-09': '1'}, local GW: VS 0 bond5 First slave System MAC address 00:1c:7f:81:09:01 slaves {'eth2-09': '2', 'eth1-09': '1'}&lt;BR /&gt;Feb 6 04:18:53 2024 mho-ch01-01 align_the_local_LACP_bonds_to_chassis_monitor_bonds.py: Skip aligning VS 0 bond6 slaves port numbers since the chassis monitor GW bond slaves and the local GW bond slaves port numbers are identical. Chassis Monitor: VS 0 bond6 First slave System MAC address 00:1c:7f:81:0a:01 slaves {'eth2-10': '2', 'eth1-10': '1'}, local GW: VS 0 bond6 First slave System MAC address 00:1c:7f:81:0a:01 slaves {'eth2-10': '2', 'eth1-10': '1'}&lt;BR /&gt;Feb 6 04:18:53 2024 mho-ch01-01 align_the_local_LACP_bonds_to_chassis_monitor_bonds.py: Skip aligning VS 0 bond7 slaves port numbers since the chassis monitor GW bond slaves and the local GW bond slaves port numbers are identical. Chassis Monitor: VS 0 bond7 First slave System MAC address 00:1c:7f:81:0c:01 slaves {'eth2-12': '2', 'eth1-12': '1'}, local GW: VS 0 bond7 First slave System MAC address 00:1c:7f:81:0c:01 slaves {'eth2-12': '2', 'eth1-12': '1'}&lt;BR /&gt;Feb 6 04:18:53 2024 mho-ch01-01 align_the_local_LACP_bonds_to_chassis_monitor_bonds.py: Skip aligning VS 0 bond1 slaves port numbers since the chassis monitor GW bond slaves and the local GW bond slaves port numbers are identical. 
Chassis Monitor: VS 0 bond1 First slave System MAC address 00:1c:7f:81:05:01 slaves {'eth1-05': '1', 'eth2-05': '2'}, local GW: VS 0 bond1 First slave System MAC address 00:1c:7f:81:05:01 slaves {'eth1-05': '1', 'eth2-05': '2'}&lt;BR /&gt;Feb 6 04:18:53 2024 mho-ch01-01 align_the_local_LACP_bonds_to_chassis_monitor_bonds.py: Skip aligning VS 0 bond2 slaves port numbers since the chassis monitor GW bond slaves and the local GW bond slaves port numbers are identical. Chassis Monitor: VS 0 bond2 First slave System MAC address 00:1c:7f:81:06:01 slaves {'eth1-06': '1', 'eth2-06': '2'}, local GW: VS 0 bond2 First slave System MAC address 00:1c:7f:81:06:01 slaves {'eth1-06': '1', 'eth2-06': '2'}&lt;BR /&gt;Feb 6 04:18:53 2024 mho-ch01-01 align_the_local_LACP_bonds_to_chassis_monitor_bonds.py: Skip aligning VS 0 bond3 slaves port numbers since the chassis monitor GW bond slaves and the local GW bond slaves port numbers are identical. Chassis Monitor: VS 0 bond3 First slave System MAC address 00:1c:7f:81:07:01 slaves {'eth1-07': '1', 'eth2-07': '2'}, local GW: VS 0 bond3 First slave System MAC address 00:1c:7f:81:07:01 slaves {'eth1-07': '1', 'eth2-07': '2'}&lt;BR /&gt;Feb 6 04:18:53 2024 mho-ch01-01 align_the_local_LACP_bonds_to_chassis_monitor_bonds.py: Skip aligning VS 0 bond8 slaves port numbers since the chassis monitor GW bond slaves and the local GW bond slaves port numbers are identical. Chassis Monitor: VS 0 bond8 First slave System MAC address 00:1c:7f:81:31:01 slaves {'eth1-49': '1', 'eth2-49': '2'}, local GW: VS 0 bond8 First slave System MAC address 00:1c:7f:81:31:01 slaves {'eth1-49': '1', 'eth2-49': '2'}&lt;BR /&gt;Feb 6 04:18:53 2024 mho-ch01-01 align_the_local_LACP_bonds_to_chassis_monitor_bonds.py: Skip aligning VS 0 bond9 slaves port numbers since the chassis monitor GW bond slaves and the local GW bond slaves port numbers are identical. 
Chassis Monitor: VS 0 bond9 First slave System MAC address 00:1c:7f:81:35:01 slaves {'eth1-53': '1', 'eth2-53': '2'}, local GW: VS 0 bond9 First slave System MAC address 00:1c:7f:81:35:01 slaves {'eth1-53': '1', 'eth2-53': '2'}&lt;BR /&gt;Feb 6 04:18:53 2024 mho-ch01-01 align_the_local_LACP_bonds_to_chassis_monitor_bonds.py: Skip aligning VS 0 bond10 slaves port numbers since the chassis monitor GW bond slaves and the local GW bond slaves port numbers are identical. Chassis Monitor: VS 0 bond10 First slave System MAC address 00:1c:7f:81:37:01 slaves {'eth1-55': '1', 'eth2-55': '2'}, local GW: VS 0 bond10 First slave System MAC address 00:1c:7f:81:37:01 slaves {'eth1-55': '1', 'eth2-55': '2'}&lt;BR /&gt;Feb 6 04:18:53 2024 mho-ch01-01 align_the_local_LACP_bonds_to_chassis_monitor_bonds.py: Skip aligning VS 0 bond11 slaves port numbers since the chassis monitor GW bond slaves and the local GW bond slaves port numbers are identical. Chassis Monitor: VS 0 bond11 First slave System MAC address 00:1c:7f:81:0e:01 slaves {'eth2-14': '2', 'eth1-14': '1'}, local GW: VS 0 bond11 First slave System MAC address 00:1c:7f:81:0e:01 slaves {'eth2-14': '2', 'eth1-14': '1'}&lt;BR /&gt;Feb 6 04:18:53 2024 mho-ch01-01 align_the_local_LACP_bonds_to_chassis_monitor_bonds.py: Operation was completed&lt;BR /&gt;Feb 6 04:18:53 2024 mho-ch01-01 lbd: align_the_local_LACP_bonds_to_chassis_monitor_bonds result, output: 'Operation was completed&lt;BR /&gt;', exit code: 0&lt;BR /&gt;Feb 6 04:18:54 2024 mho-ch01-01 lbd: Report LACP_SYNC pnote OK, all LACP bonds were successfully verified&lt;BR /&gt;Feb 6 04:19:01 2024 mho-ch01-01 confd: setrlimit(RLIMIT_NOFILE, 512): Operation not permitted&lt;BR /&gt;Feb 6 04:19:01 2024 mho-ch01-01 xpand[66780]: Performing database cloning&lt;BR /&gt;Feb 6 04:19:01 2024 mho-ch01-01 xpand[66780]: load_config&amp;gt;(/config/db/cloned_db,empty)&lt;BR /&gt;Tue Feb 06 01:19:03 PST 2024: [asg_update_diag_config]: merging asg_diag_config from SMO with latest 
asg_diag_config&lt;BR /&gt;Feb 6 04:19:13 2024 mho-ch01-01 kernel:ssm_com_should_update_switch: update is 1 due to change in present_members: (old) &amp;lt;=&amp;gt; (new)&lt;BR /&gt;Feb 6 04:19:13 2024 mho-ch01-01 lbd: lbd_trap_handler: cxl_is_blade_for_task(LOAD_BALANCE) = 0&lt;BR /&gt;Feb 6 04:19:17 2024 mho-ch01-01 xpand[75311]: Configuration changed from localhost by user admin by the service dbset&lt;BR /&gt;Feb 6 04:19:17 2024 mho-ch01-01 xpand[75311]: Configuration changed from localhost by user admin by the service dbset&lt;BR /&gt;Feb 6 04:19:17 2024 mho-ch01-01 xpand[75311]: Configuration changed from localhost by user admin by the service dbset&lt;BR /&gt;Feb 6 04:19:17 2024 mho-ch01-01 xpand[75311]: Configuration changed from localhost by user admin by the service dbset&lt;BR /&gt;Feb 6 04:19:17 2024 mho-ch01-01 xpand[75311]: Configuration changed from localhost by user admin by the service dbset&lt;BR /&gt;Feb 6 04:19:17 2024 mho-ch01-01 xpand[75311]: Configuration changed from localhost by user admin by the service dbset&lt;BR /&gt;Feb 6 04:19:17 2024 mho-ch01-01 xpand[75311]: Configuration changed from localhost by user admin by the service dbset&lt;BR /&gt;Feb 6 04:19:17 2024 mho-ch01-01 xpand[75311]: Configuration changed from localhost by user admin by the service dbset&lt;BR /&gt;Feb 6 04:19:17 2024 mho-ch01-01 xpand[75311]: Configuration changed from localhost by user admin by the service dbset&lt;BR /&gt;Feb 6 04:19:17 2024 mho-ch01-01 xpand[75311]: Configuration changed from localhost by user admin by the service dbset&lt;BR /&gt;Feb 6 04:19:18 2024 mho-ch01-01 xpand[75311]: Configuration changed from localhost by user admin by the service dbset&lt;BR /&gt;Feb 6 04:19:18 2024 mho-ch01-01 fwk: CLUS-112004-1_01: State change: DOWN -&amp;gt; ACTIVE | Reason: USER DEFINED PNOTE&lt;/P&gt;&lt;P&gt;Feb 6 04:19:18 2024 mho-ch01-01 fwk: CLUS-115704-1_01: State change: DOWN -&amp;gt; ACTIVE | Reason: Member state has been changed due to issue in 
Virtual System 0&lt;/P&gt;&lt;P&gt;Feb 6 04:19:18 2024 mho-ch01-01 fwk: CLUS-115704-1_01: State change: DOWN -&amp;gt; ACTIVE | Reason: Member state has been changed due to issue in Virtual System 0&lt;/P&gt;&lt;P&gt;Feb 6 04:19:18 2024 mho-ch01-01 fwk: CLUS-115704-1_01: State change: DOWN -&amp;gt; ACTIVE | Reason: Member state has been changed due to issue in Virtual System 0&lt;/P&gt;&lt;P&gt;Feb 6 04:19:18 2024 mho-ch01-01 fwk: CLUS-115704-1_01: State change: DOWN -&amp;gt; ACTIVE | Reason: Member state has been changed due to issue in Virtual System 0&lt;/P&gt;&lt;P&gt;Feb 6 04:19:18 2024 mho-ch01-01 fwk: CLUS-115704-1_01: State change: DOWN -&amp;gt; ACTIVE | Reason: Member state has been changed due to issue in Virtual System 0&lt;/P&gt;&lt;P&gt;Feb 6 04:19:18 2024 mho-ch01-01 xpand[75311]: Configuration changed from localhost by user admin by the service dbset&lt;BR /&gt;Feb 6 04:19:18 2024 mho-ch01-01 xpand[75311]: Configuration changed from localhost by user admin by the service dbset&lt;BR /&gt;Feb 6 04:19:18 2024 mho-ch01-01 xpand[75311]: Configuration changed from localhost by user admin by the service dbset&lt;BR /&gt;Feb 6 04:19:18 2024 mho-ch01-01 kernel:ssm_com_should_update_switch: update is 1 due to change in present_members: (old) &amp;lt;=&amp;gt; (new)&lt;BR /&gt;Feb 6 04:19:18 2024 mho-ch01-01 kernel:[fw4_0];fwkdrv_process_vsx_global_data_in_kernel: Updating smo task to 0&lt;BR /&gt;Feb 6 04:19:18 2024 mho-ch01-01 lbd: lbd_trap_handler: cxl_is_blade_for_task(LOAD_BALANCE) = 1&lt;BR /&gt;Feb 6 04:19:18 2024 mho-ch01-01 sgm_pmd: SGM became port monitoring task&lt;BR /&gt;Feb 6 04:19:18 2024 mho-ch01-01 kernel:ssm_com_should_update_switch: update is 1 due to change in present_members: (old) &amp;lt;=&amp;gt; (new)&lt;BR /&gt;Feb 6 04:19:18 2024 mho-ch01-01 lbd: lbd_trap_handler: cxl_is_blade_for_task(LOAD_BALANCE) = 1&lt;BR /&gt;Feb 6 04:19:18 2024 mho-ch01-01 asg_send_alert: [1_1] SGM 1 in Chassis ID 1 state changed to UP&lt;BR /&gt;Feb 6 
04:19:18 2024 mho-ch01-01 sgm_pmd: Updating Orchestrators matrix...&lt;BR /&gt;Feb 6 04:19:18 2024 mho-ch01-01 lbd: lbd_trap_handler: cxl_is_blade_for_task(LOAD_BALANCE) = 1&lt;BR /&gt;Feb 6 04:19:18 2024 mho-ch01-01 asg_send_alert: [1_3] SGM 3 in Chassis ID 1 state changed to DOWN.&lt;BR /&gt;Feb 6 04:19:19 2024 mho-ch01-01 lbd: Local member turned to "blade_for_task".&lt;BR /&gt;Feb 6 04:19:19 2024 mho-ch01-01 lbd: lbd_periodic_handler: lbd_trap_received = 1, lbd_num_of_retry = 0&lt;BR /&gt;Feb 6 04:19:19 2024 mho-ch01-01 lbd: lbd_send_bucket_list: members = 0xf, active_members = 0x9&lt;BR /&gt;Feb 6 04:19:19 2024 mho-ch01-01 sgm_pmd: Setting interface eth1-Mgmt1 link state to up&lt;BR /&gt;Feb 6 04:19:19 2024 mho-ch01-01 sgm_pmd: Setting interface eth1-10 link state to up&lt;BR /&gt;Feb 6 04:19:19 2024 mho-ch01-01 sgm_pmd: Setting interface eth1-12 link state to up&lt;BR /&gt;Feb 6 04:19:19 2024 mho-ch01-01 sgm_pmd: Setting interface eth1-14 link state to up&lt;BR /&gt;Feb 6 04:19:19 2024 mho-ch01-01 sgm_pmd: Setting interface eth1-49 link state to up&lt;BR /&gt;Feb 6 04:19:19 2024 mho-ch01-01 sgm_pmd: Setting interface eth1-05 link state to up&lt;BR /&gt;Feb 6 04:19:19 2024 mho-ch01-01 sgm_pmd: Setting interface eth1-53 link state to up&lt;BR /&gt;Feb 6 04:19:19 2024 mho-ch01-01 sgm_pmd: Setting interface eth1-55 link state to up&lt;BR /&gt;Feb 6 04:19:19 2024 mho-ch01-01 sgm_pmd: Setting interface eth1-06 link state to up&lt;BR /&gt;Feb 6 04:19:19 2024 mho-ch01-01 kernel:ssm_com_should_update_switch: update is 1 due to change in present_members: (old) &amp;lt;=&amp;gt; (new)&lt;BR /&gt;Feb 6 04:19:19 2024 mho-ch01-01 sgm_pmd: Setting interface eth1-07 link state to up&lt;BR /&gt;Feb 6 04:19:19 2024 mho-ch01-01 sgm_pmd: Setting interface eth1-08 link state to up&lt;BR /&gt;Feb 6 04:19:19 2024 mho-ch01-01 asg_send_alert: [1_2] SGM 2 in Chassis ID 1 state changed to UP&lt;BR /&gt;Feb 6 04:19:19 2024 mho-ch01-01 sgm_pmd: Setting interface eth1-09 link state to 
up&lt;BR /&gt;Feb 6 04:19:19 2024 mho-ch01-01 sgm_pmd: Setting eth1-Sync cluster link state to down(current cluster state is up)&lt;BR /&gt;Feb 6 04:19:19 2024 mho-ch01-01 sgm_pmd: Running: cphaprob link_state set local eth1-Sync -d full -s 10000M down&lt;BR /&gt;Feb 6 04:19:19 2024 mho-ch01-01 sgm_pmd: Setting interface eth2-Mgmt1 link state to up&lt;BR /&gt;Feb 6 04:19:19 2024 mho-ch01-01 sgm_pmd: Setting interface eth2-10 link state to up&lt;BR /&gt;Feb 6 04:19:19 2024 mho-ch01-01 sgm_pmd: Setting interface eth2-12 link state to up&lt;BR /&gt;Feb 6 04:19:19 2024 mho-ch01-01 sgm_pmd: Setting interface eth2-14 link state to up&lt;BR /&gt;Feb 6 04:19:19 2024 mho-ch01-01 lbd: lbd_rest_api_send:: SSM1: Rest request sent successfully.&lt;BR /&gt;Feb 6 04:19:19 2024 mho-ch01-01 sgm_pmd: Setting interface eth2-49 link state to up&lt;BR /&gt;Feb 6 04:19:19 2024 mho-ch01-01 sgm_pmd: Setting interface eth2-05 link state to up&lt;BR /&gt;Feb 6 04:19:19 2024 mho-ch01-01 sgm_pmd: Setting interface eth2-53 link state to up&lt;BR /&gt;Feb 6 04:19:20 2024 mho-ch01-01 sgm_pmd: Setting interface eth2-55 link state to up&lt;BR /&gt;Feb 6 04:19:20 2024 mho-ch01-01 sgm_pmd: Setting interface eth2-06 link state to up&lt;BR /&gt;Feb 6 04:19:20 2024 mho-ch01-01 sgm_pmd: Setting interface eth2-07 link state to up&lt;BR /&gt;Feb 6 04:19:20 2024 mho-ch01-01 sgm_pmd: Setting interface eth2-08 link state to up&lt;BR /&gt;Feb 6 04:19:20 2024 mho-ch01-01 sgm_pmd: Setting interface eth2-09 link state to up&lt;BR /&gt;Feb 6 04:19:20 2024 mho-ch01-01 sgm_pmd: Setting eth2-Sync cluster link state to down(current cluster state is up)&lt;BR /&gt;Feb 6 04:19:20 2024 mho-ch01-01 sgm_pmd: Running: cphaprob link_state set local eth2-Sync -d full -s 10000M down&lt;BR /&gt;Feb 6 04:19:20 2024 mho-ch01-01 lbd: lbd_rest_api_send:: SSM2: Rest request sent successfully.&lt;BR /&gt;Feb 6 04:19:20 2024 mho-ch01-01 lbd: lbd_trap_handler: cxl_is_blade_for_task(LOAD_BALANCE) = 1&lt;BR /&gt;Feb 6 04:19:21 2024 
mho-ch01-01 lbd: lbd_periodic_handler: lbd_trap_received = 1, lbd_num_of_retry = 0&lt;BR /&gt;Feb 6 04:19:21 2024 mho-ch01-01 lbd: lbd_send_bucket_list: members = 0xf, active_members = 0xb&lt;BR /&gt;Feb 6 04:19:22 2024 mho-ch01-01 lbd: lbd_rest_api_send:: SSM1: Rest request sent successfully.&lt;BR /&gt;Feb 6 04:19:23 2024 mho-ch01-01 lbd: lbd_rest_api_send:: SSM2: Rest request sent successfully.&lt;BR /&gt;Feb 6 04:19:23 2024 mho-ch01-01 sgm_pmd: Running distutil update...&lt;BR /&gt;Feb 6 04:19:23 2024 mho-ch01-01 sg_xlate: Re-generating /etc/groups&lt;BR /&gt;Feb 6 04:19:23 2024 mho-ch01-01 sg_xlate: /etc/groups: all 1_01,1_02,1_03,1_04&lt;/P&gt;&lt;P&gt;Feb 6 04:19:23 2024 mho-ch01-01 sg_xlate: /etc/groups: chassis1 1_01,1_02,1_03,1_04&lt;/P&gt;&lt;P&gt;Feb 6 04:19:23 2024 mho-ch01-01 sg_xlate: /etc/groups: chassis_active 1_01,1_02&lt;BR /&gt;Feb 6 04:19:34 2024 mho-ch01-01 PAM-tacplus[5407]: auth failed: 2&lt;BR /&gt;Feb 6 04:19:38 2024 mho-ch01-01 kernel:ssm_com_should_update_switch: update is 1 due to change in present_members: (old) &amp;lt;=&amp;gt; (new)&lt;BR /&gt;Feb 6 04:19:38 2024 mho-ch01-01 lbd: lbd_trap_handler: cxl_is_blade_for_task(LOAD_BALANCE) = 1&lt;BR /&gt;Feb 6 04:19:39 2024 mho-ch01-01 lbd: lbd_periodic_handler: lbd_trap_received = 1, lbd_num_of_retry = 0&lt;BR /&gt;Feb 6 04:19:39 2024 mho-ch01-01 lbd: lbd_send_bucket_list: members = 0xf, active_members = 0xb&lt;BR /&gt;Feb 6 04:19:40 2024 mho-ch01-01 lbd: lbd_rest_api_send:: SSM1: Rest request sent successfully.&lt;BR /&gt;Feb 6 04:19:40 2024 mho-ch01-01 lbd: lbd_rest_api_send:: SSM2: Rest request sent successfully.&lt;BR /&gt;Feb 6 04:19:43 2024 mho-ch01-01 kernel:ssm_com_should_update_switch: update is 1 due to change in present_members: (old) &amp;lt;=&amp;gt; (new)&lt;BR /&gt;Feb 6 04:19:43 2024 mho-ch01-01 lbd: lbd_trap_handler: cxl_is_blade_for_task(LOAD_BALANCE) = 1&lt;BR /&gt;Feb 6 04:19:43 2024 mho-ch01-01 lbd: lbd_periodic_handler: lbd_trap_received = 1, lbd_num_of_retry = 
0&lt;BR /&gt;Feb 6 04:19:43 2024 mho-ch01-01 lbd: lbd_send_bucket_list: members = 0xf, active_members = 0xb&lt;BR /&gt;Feb 6 04:19:44 2024 mho-ch01-01 lbd: lbd_rest_api_send:: SSM1: Rest request sent successfully.&lt;BR /&gt;Feb 6 04:19:45 2024 mho-ch01-01 lbd: lbd_rest_api_send:: SSM2: Rest request sent successfully.&lt;BR /&gt;Feb 6 04:19:45 2024 mho-ch01-01 expert: SSH connection by admin user to Expert Shell with client IP 10.1.1.1 at 01:19 02/06/2024&lt;BR /&gt;Feb 6 04:19:58 2024 mho-ch01-01 lbd: lbd_trap_handler: cxl_is_blade_for_task(LOAD_BALANCE) = 1&lt;BR /&gt;Feb 6 04:19:58 2024 mho-ch01-01 kernel:ssm_com_should_update_switch: update is 1 due to change in present_members: (old) &amp;lt;=&amp;gt; (new)&lt;BR /&gt;Feb 6 04:19:58 2024 mho-ch01-01 asg_send_alert: [1_3] SGM 3 in Chassis ID 1 state changed to UP&lt;BR /&gt;Feb 6 04:19:59 2024 mho-ch01-01 lbd: lbd_periodic_handler: lbd_trap_received = 1, lbd_num_of_retry = 0&lt;BR /&gt;Feb 6 04:19:59 2024 mho-ch01-01 lbd: lbd_send_bucket_list: members = 0xf, active_members = 0xf&lt;BR /&gt;Feb 6 04:20:00 2024 mho-ch01-01 lbd: lbd_rest_api_send:: SSM1: Rest request sent successfully.&lt;BR /&gt;Feb 6 04:20:00 2024 mho-ch01-01 lbd: lbd_rest_api_send:: SSM2: Rest request sent successfully.&lt;/P&gt;&lt;P&gt;-----------------------------------------------------------------------------------------------------------------------------------------------&lt;BR /&gt;#fwk.elg&lt;/P&gt;&lt;P&gt;[6 Feb 4:18:38][fw4_0];[vs_0];CLUS-120103-1_01: VSX PNOTE ON&lt;BR /&gt;[6 Feb 4:18:38][fw4_0];[vs_0];CLUS-111500-1_01: State change: ACTIVE -&amp;gt; DOWN | Reason: VSX PNOTE due to problem in Virtual System 5&lt;BR /&gt;[6 Feb 4:18:38][fw4_0];[vs_0];FW-1: [BLADE]: Blade 1_1 changed Blade 1_1's state: ACTIVE ---&amp;gt; DOWN (cluster state changed to: DOWN)&lt;BR /&gt;[6 Feb 4:18:38][fw4_0];[vs_0];FW-1: [SMO]: Current blade (1_1) is no longer SMO Mgmt Master (on 99964150)&lt;BR /&gt;[6 Feb 4:18:38][fw4_0];[vs_0];FW-1: 
fwha_update_member_running_dr_task: Changed DR manager from member (1_01) to (1_02)&lt;BR /&gt;[6 Feb 4:18:38][fw4_0];[vs_0];FW-1: fwha_update_member_running_smo_task: Changed fwha_smo_member from member (1_01) to (1_02)&lt;BR /&gt;[6 Feb 4:18:39][fw4_0];[vs_0];CLUS-100101-1_01: Failover member 1_01 | Reason: VSX PNOTE&lt;BR /&gt;[6 Feb 4:18:40][fw4_0];[vs_0];FW-1: [BLADE]: Blade 1_1 changed Blade 1_2's state: ACTIVE ---&amp;gt; DOWN (State: ACTIVE -&amp;gt; DOWN.)&lt;BR /&gt;[6 Feb 4:18:40][fw4_0];[vs_0];FW-1: fwha_update_member_running_dr_task: Changed DR manager from member (1_02) to (1_03)&lt;BR /&gt;[6 Feb 4:18:40][fw4_0];[vs_0];FW-1: fwha_update_member_running_smo_task: Changed fwha_smo_member from member (1_02) to (1_03)&lt;BR /&gt;[6 Feb 4:18:40][fw4_0];[vs_0];CLUS-211500-1_01: Remote member 1_02 (state ACTIVE -&amp;gt; DOWN) | Reason: VSX PNOTE&lt;BR /&gt;[6 Feb 4:18:40][fw4_0];[vs_0];CLUS-100202-1_01: Failover member 1_02 | Reason: Available on member 1_02&lt;BR /&gt;[6 Feb 4:18:42][fw4_0];[vs_0];FW-1: [BLADE]: Blade 1_1 changed Blade 1_4's state: ACTIVE ---&amp;gt; DOWN (State: ACTIVE -&amp;gt; DOWN.)&lt;BR /&gt;[6 Feb 4:18:42][fw4_0];[vs_0];CLUS-211500-1_01: Remote member 1_04 (state ACTIVE -&amp;gt; DOWN) | Reason: VSX PNOTE&lt;BR /&gt;[6 Feb 4:18:42][fw4_0];[vs_0];CLUS-100404-1_01: Failover member 1_04 | Reason: Available on member 1_04&lt;BR /&gt;[6 Feb 4:18:43][fw4_0];[vs_0];CLUS-211505-1_01: Remote member 1_03 (state ACTIVE -&amp;gt; ACTIVE(!)) | Reason: VSX PNOTE&lt;BR /&gt;[6 Feb 4:18:48][fw4_0];[vs_0];CLUS-214904-1_01: Remote member 1_03 (state ACTIVE(!) -&amp;gt; ACTIVE) | Reason: Reason for ACTIVE! 
alert has been resolved&lt;BR /&gt;[6 Feb 4:18:52][fw4_0];[vs_0];CLUS-120103-1_01: VSX PNOTE OFF&lt;BR /&gt;[6 Feb 4:18:52][fw4_0];[vs_0];report_lacp_sync_pnote_problem: registering LACP_SYNC pnote with problem&lt;BR /&gt;[6 Feb 4:18:52][fw4_0];[vs_0];CLUS-120109-1_01: LACP_SYNC PNOTE ON&lt;BR /&gt;[6 Feb 4:18:52][fw4_0];[vs_0];FW-1: [BLADE]: Blade 1_1 changed Blade 1_1's state: READY ---&amp;gt; DOWN (cluster state changed to: DOWN)&lt;BR /&gt;[6 Feb 4:18:54][fw4_0];[vs_0];CLUS-120109-1_01: LACP_SYNC PNOTE OFF&lt;BR /&gt;[6 Feb 4:18:54][fw4_0];[vs_0];fwha_ch_should_pull_config: registering pull_config pnote with problem&lt;BR /&gt;[6 Feb 4:18:54][fw4_0];[vs_0];CLUS-120109-1_01: pull_config PNOTE ON&lt;BR /&gt;[6 Feb 4:18:54][fw4_0];[vs_0];FW-1: [BLADE]: Blade 1_1 changed Blade 1_1's state: READY ---&amp;gt; DOWN (cluster state changed to: DOWN)&lt;BR /&gt;[6 Feb 4:18:54][fw4_0];[vs_0];fwha_ch_should_pull_config: pulling configuration for local SGM&lt;BR /&gt;[6 Feb 4:18:56][fw4_0];[vs_0];fwha_mbs_pull_config_start_notify_cb: pull config was initiated&lt;BR /&gt;[6 Feb 4:18:56][fw4_0];[vs_0];fwha_mbs_pull_config_start_notify_cb: waiting for all VSs pull config to be done before going up&lt;BR /&gt;[6 Feb 4:19:12][fw4_0];[vs_0];FW-1: [BLADE]: Blade 1_1 changed Blade 1_4's state: DOWN ---&amp;gt; READY (State: DOWN -&amp;gt; READY.)&lt;BR /&gt;[6 Feb 4:19:12][fw4_0];[vs_0];CLUS-216703-1_01: Remote member 1_04 (state DOWN -&amp;gt; ACTIVE*) | Reason: Trying to move to ACTIVE state&lt;BR /&gt;[6 Feb 4:19:12][fw4_0];[vs_0];FW-1: [BLADE]: Blade 1_1 changed Blade 1_4's state: READY ---&amp;gt; ACTIVE (State: READY -&amp;gt; ACTIVE.)&lt;BR /&gt;[6 Feb 4:19:12][fw4_0];[vs_0];CLUS-212004-1_01: Remote member 1_04 (state ACTIVE* -&amp;gt; ACTIVE) | Reason: USER DEFINED PNOTE&lt;BR /&gt;[6 Feb 4:19:17][fw4_0];[vs_0];fwha_mbs_pull_config_done_notify_cb: VS 1 finished pull config&lt;BR /&gt;[6 Feb 4:19:17][fw4_0];[vs_0];fwha_mbs_pull_config_done_notify_cb: VS 2 finished pull 
config&lt;BR /&gt;[6 Feb 4:19:17][fw4_0];[vs_0];fwha_mbs_pull_config_done_notify_cb: VS 3 finished pull config&lt;BR /&gt;[6 Feb 4:19:18][fw4_0];[vs_0];fwha_mbs_pull_config_done_notify_cb: VS 5 finished pull config&lt;BR /&gt;[6 Feb 4:19:18][fw4_0];[vs_0];fwha_mbs_pull_config_done_notify_cb: VS 4 finished pull config&lt;BR /&gt;[6 Feb 4:19:18][fw4_0];[vs_0];fwha_mbs_pull_config_done_notify_cb: all VSs finished pull config&lt;BR /&gt;[6 Feb 4:19:18][fw4_0];[vs_0];fwha_mbs_pull_config_done_notify_cb: pull config is done&lt;BR /&gt;[6 Feb 4:19:18][fw4_0];[vs_0];fwha_mbs_pull_config_periodic: fwha_mbs_pull_config_done = 1, removing pnote&lt;BR /&gt;[6 Feb 4:19:18][fw4_0];[vs_0];fwha_mbs_pull_config_periodic: unregistering pull_config pnote&lt;BR /&gt;[6 Feb 4:19:18][fw4_0];[vs_0];CLUS-120109-1_01: pull_config PNOTE OFF&lt;BR /&gt;[6 Feb 4:19:18][fw4_0];[vs_0];fwha_mbs_pull_config_periodic: fwha_mbs_pull_config_done = 1, removing pnote&lt;BR /&gt;[6 Feb 4:19:18][fw4_0];[vs_0];fwha_mbs_pull_config_periodic: unregistering pull_config pnote&lt;BR /&gt;[6 Feb 4:19:18][fw4_0];[vs_0];FW-1: [BLADE]: Blade 1_1 changed Blade 1_1's state: DOWN ---&amp;gt; READY (cluster state changed to: READY)&lt;BR /&gt;[6 Feb 4:19:18][fw4_0];[vs_0];CLUS-112004-1_01: State change: DOWN -&amp;gt; ACTIVE | Reason: USER DEFINED PNOTE&lt;BR /&gt;[6 Feb 4:19:18][fw4_0];[vs_0];FW-1: [BLADE]: Blade 1_1 changed Blade 1_1's state: READY ---&amp;gt; ACTIVE (cluster state changed to: ACTIVE)&lt;BR /&gt;[6 Feb 4:19:18][fw4_0];[vs_0];FW-1: [SMO]: Current blade (1_1) is now SMO Mgmt Master (on 99964543)&lt;BR /&gt;[6 Feb 4:19:18][fw4_0];[vs_0];FW-1: fwha_assure_system_stability_for_df: Local Chassis grade: 122 -&amp;gt; 128(param_change 1)&lt;BR /&gt;[6 Feb 4:19:18][fw4_0];[vs_0];FW-1: fwha_update_member_running_smo_task: Changed fwha_smo_member from member (1_03) to (1_01)&lt;BR /&gt;[6 Feb 4:19:18][fw4_0];[vs_0];FW-1: fwha_update_member_running_dr_task: Changed DR manager from member (1_03) to (1_01)&lt;BR 
/&gt;[6 Feb 4:19:18][fw4_0];[vs_0];FW-1: [BLADE]: Blade 1_1 changed Blade 1_3's state: ACTIVE ---&amp;gt; DOWN (State: ACTIVE -&amp;gt; DOWN.)&lt;BR /&gt;[6 Feb 4:19:18][fw4_0];[vs_0];CLUS-211500-1_01: Remote member 1_03 (state ACTIVE -&amp;gt; DOWN) | Reason: VSX PNOTE&lt;BR /&gt;[6 Feb 4:19:18][fw4_0];[vs_0];CLUS-100303-1_01: Failover member 1_03 | Reason: Available on member 1_03&lt;BR /&gt;[6 Feb 4:19:18][fw4_0];[vs_0];FW-1: fwha_assure_system_stability_for_df: Local Chassis grade: 128 -&amp;gt; 122(param_change 1)&lt;BR /&gt;[6 Feb 4:19:19][fw4_0];[vs_0];FW-1: [BLADE]: Blade 1_1 changed Blade 1_2's state: DOWN ---&amp;gt; READY (State: DOWN -&amp;gt; READY.)&lt;BR /&gt;[6 Feb 4:19:19][fw4_0];[vs_0];CLUS-216703-1_01: Remote member 1_02 (state DOWN -&amp;gt; ACTIVE*) | Reason: Trying to move to ACTIVE state&lt;BR /&gt;[6 Feb 4:19:19][fw4_0];[vs_0];FW-1: [BLADE]: Blade 1_1 changed Blade 1_2's state: READY ---&amp;gt; ACTIVE (State: READY -&amp;gt; ACTIVE.)&lt;BR /&gt;[6 Feb 4:19:19][fw4_0];[vs_0];CLUS-212004-1_01: Remote member 1_02 (state ACTIVE* -&amp;gt; ACTIVE) | Reason: USER DEFINED PNOTE&lt;BR /&gt;[6 Feb 4:19:19][fw4_0];[vs_0];FW-1: fwha_assure_system_stability_for_df: Local Chassis grade: 122 -&amp;gt; 128(param_change 1)&lt;BR /&gt;[6 Feb 4:19:58][fw4_0];[vs_0];FW-1: [BLADE]: Blade 1_1 changed Blade 1_3's state: DOWN ---&amp;gt; READY (State: DOWN -&amp;gt; READY.)&lt;BR /&gt;[6 Feb 4:19:58][fw4_0];[vs_0];CLUS-216703-1_01: Remote member 1_03 (state DOWN -&amp;gt; ACTIVE*) | Reason: Trying to move to ACTIVE state&lt;BR /&gt;[6 Feb 4:19:58][fw4_0];[vs_0];FW-1: [BLADE]: Blade 1_1 changed Blade 1_3's state: READY ---&amp;gt; ACTIVE (State: READY -&amp;gt; ACTIVE.)&lt;BR /&gt;[6 Feb 4:19:58][fw4_0];[vs_0];CLUS-212004-1_01: Remote member 1_03 (state ACTIVE* -&amp;gt; ACTIVE) | Reason: USER DEFINED PNOTE&lt;BR /&gt;[6 Feb 4:19:58][fw4_0];[vs_0];FW-1: fwha_assure_system_stability_for_df: Local Chassis grade: 128 -&amp;gt; 134(param_change 
1)&lt;/P&gt;</description>
      <pubDate>Tue, 06 Feb 2024 10:34:21 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Hyperscale-Firewall-Maestro/maestro-vsx-pnotes/m-p/205135#M2416</guid>
      <dc:creator>wsitu</dc:creator>
      <dc:date>2024-02-06T10:34:21Z</dc:date>
    </item>
    <item>
      <title>Re: maestro vsx pnotes</title>
      <link>https://community.checkpoint.com/t5/Hyperscale-Firewall-Maestro/maestro-vsx-pnotes/m-p/205211#M2417</link>
      <description>&lt;P&gt;I would recommend opening a TAC case for further troubleshooting and finding a solution.&lt;/P&gt;</description>
      <pubDate>Tue, 06 Feb 2024 19:06:44 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Hyperscale-Firewall-Maestro/maestro-vsx-pnotes/m-p/205211#M2417</guid>
      <dc:creator>Lari_Luoma</dc:creator>
      <dc:date>2024-02-06T19:06:44Z</dc:date>
    </item>
    <item>
      <title>Re: maestro vsx pnotes</title>
      <link>https://community.checkpoint.com/t5/Hyperscale-Firewall-Maestro/maestro-vsx-pnotes/m-p/227755#M2879</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.checkpoint.com/t5/user/viewprofilepage/user-id/79000"&gt;@wsitu&lt;/a&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I would like to enquire: have you received any answer from TAC?&lt;/P&gt;
&lt;P&gt;Akos&lt;/P&gt;</description>
      <pubDate>Tue, 24 Sep 2024 12:46:42 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Hyperscale-Firewall-Maestro/maestro-vsx-pnotes/m-p/227755#M2879</guid>
      <dc:creator>AkosBakos</dc:creator>
      <dc:date>2024-09-24T12:46:42Z</dc:date>
    </item>
    <item>
      <title>Re: maestro vsx pnotes</title>
      <link>https://community.checkpoint.com/t5/Hyperscale-Firewall-Maestro/maestro-vsx-pnotes/m-p/227806#M2880</link>
      <description>&lt;P&gt;the issue was resolved with Jumbo 41.&amp;nbsp;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 24 Sep 2024 18:16:43 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Hyperscale-Firewall-Maestro/maestro-vsx-pnotes/m-p/227806#M2880</guid>
      <dc:creator>wsitu</dc:creator>
      <dc:date>2024-09-24T18:16:43Z</dc:date>
    </item>
    <item>
      <title>Re: maestro vsx pnotes</title>
      <link>https://community.checkpoint.com/t5/Hyperscale-Firewall-Maestro/maestro-vsx-pnotes/m-p/227849#M2881</link>
      <description>&lt;P&gt;Thx&amp;nbsp;&lt;a href="https://community.checkpoint.com/t5/user/viewprofilepage/user-id/79000"&gt;@wsitu&lt;/a&gt;&amp;nbsp;!&lt;/P&gt;</description>
      <pubDate>Wed, 25 Sep 2024 08:04:40 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Hyperscale-Firewall-Maestro/maestro-vsx-pnotes/m-p/227849#M2881</guid>
      <dc:creator>AkosBakos</dc:creator>
      <dc:date>2024-09-25T08:04:40Z</dc:date>
    </item>
    <item>
      <title>Re: maestro vsx pnotes</title>
      <link>https://community.checkpoint.com/t5/Hyperscale-Firewall-Maestro/maestro-vsx-pnotes/m-p/228467#M2888</link>
      <description>&lt;P&gt;Hi!&lt;/P&gt;&lt;P&gt;We're seeing similar messages. Did TAC mention why the issue occured in the first place?&lt;/P&gt;</description>
      <pubDate>Mon, 30 Sep 2024 13:47:40 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Hyperscale-Firewall-Maestro/maestro-vsx-pnotes/m-p/228467#M2888</guid>
      <dc:creator>kamilazat</dc:creator>
      <dc:date>2024-09-30T13:47:40Z</dc:date>
    </item>
    <item>
      <title>Re: maestro vsx pnotes</title>
      <link>https://community.checkpoint.com/t5/Hyperscale-Firewall-Maestro/maestro-vsx-pnotes/m-p/228514#M2889</link>
      <description>&lt;P&gt;it was escalated to R&amp;amp;D.&amp;nbsp; we did not receive a response regarding the root cause.&amp;nbsp; since the resolution was simply Jumbo upgrade, we closed the case.&lt;/P&gt;</description>
      <pubDate>Mon, 30 Sep 2024 16:43:22 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/Hyperscale-Firewall-Maestro/maestro-vsx-pnotes/m-p/228514#M2889</guid>
      <dc:creator>wsitu</dc:creator>
      <dc:date>2024-09-30T16:43:22Z</dc:date>
    </item>
  </channel>
</rss>

