<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: ClusterXl synchronization network bandwidth in General Topics</title>
    <link>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/108806#M20705</link>
    <description>&lt;P&gt;LACP does not imply that you are using a single switch.&lt;/P&gt;&lt;P&gt;LACP can be used with a switch stack.&lt;BR /&gt;Also, most datacenter switches support functions such as vPC (Cisco Virtual Port Channel), allowing you to build an LACP bond to different physical switches within a vPC pair.&lt;/P&gt;</description>
    <pubDate>Mon, 25 Jan 2021 23:13:06 GMT</pubDate>
    <dc:creator>Magnus-Holmberg</dc:creator>
    <dc:date>2021-01-25T23:13:06Z</dc:date>
    <item>
      <title>ClusterXl synchronization network bandwidth</title>
      <link>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/59337#M11977</link>
      <description>&lt;P&gt;Hello everyone.&lt;/P&gt;&lt;P&gt;I would like to ask whether, on a Check Point ClusterXL (2 Gaia R80.10 gateways) working in HA (active/standby) mode, the synchronization interface/network can have less bandwidth than the "production" interfaces. In the documentation I have found that for sync only distance (delay) matters. For example, can I use 10 Gbit links for the DMZ, internal and external networks and "only" 1 Gbit for the sync interface? Or should I use a 4x1 Gbit bond if a 1 Gbit link is insufficient?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks for your help&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 31 Jul 2019 11:12:31 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/59337#M11977</guid>
      <dc:creator>CheckMate-R77</dc:creator>
      <dc:date>2019-07-31T11:12:31Z</dc:date>
    </item>
    <item>
      <title>Re: ClusterXl synchronization network bandwidth</title>
      <link>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/59341#M11980</link>
      <description>&lt;P&gt;I have also found this (&lt;A href="https://sc1.checkpoint.com/documents/R80.10/WebAdminGuides/EN/CP_R80.10_ClusterXL_AdminGuide/html_frameset.htm" target="_blank" rel="noopener"&gt;https://sc1.checkpoint.com/documents/R80.10/WebAdminGuides/EN/CP_R80.10_ClusterXL_AdminGuide/html_frameset.htm&lt;/A&gt;)&amp;nbsp;&lt;span class="lia-unicode-emoji" title=":disappointed_face:"&gt;😞&lt;/span&gt;&lt;/P&gt;&lt;P&gt;"Note&lt;STRONG&gt; -&lt;/STRONG&gt; There is no requirement for throughput of Sync interface to be identical to, or larger than throughput of traffic interfaces (although, to prevent a possible bottle neck, a good practice for throughput of Sync interface is to be at least identical to throughput of traffic interfaces)."&lt;/P&gt;&lt;P&gt;So there can be a bottleneck in my link configuration - I think especially in &lt;EM&gt;Full Sync&lt;/EM&gt; transfers :-(.&lt;/P&gt;</description>
      <pubDate>Wed, 31 Jul 2019 11:38:20 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/59341#M11980</guid>
      <dc:creator>CheckMate-R77</dc:creator>
      <dc:date>2019-07-31T11:38:20Z</dc:date>
    </item>
    <item>
      <title>Re: ClusterXl synchronization network bandwidth</title>
      <link>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/59342#M11981</link>
      <description>2x 1Gbps for sync should suffice.</description>
      <pubDate>Wed, 31 Jul 2019 11:40:55 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/59342#M11981</guid>
      <dc:creator>PhoneBoy</dc:creator>
      <dc:date>2019-07-31T11:40:55Z</dc:date>
    </item>
    <item>
      <title>Re: ClusterXl synchronization network bandwidth</title>
      <link>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/59343#M11982</link>
      <description>&lt;P&gt;No, it will not.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;LACP will not give you 2x1=2Gbps, because the balancing is per pair of IPs, and the IPs are always the same. You will have 1 Gbps available for sync there.&lt;/P&gt;</description>
      <pubDate>Wed, 31 Jul 2019 11:50:26 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/59343#M11982</guid>
      <dc:creator>_Val_</dc:creator>
      <dc:date>2019-07-31T11:50:26Z</dc:date>
    </item>
    <item>
      <title>Re: ClusterXl synchronization network bandwidth</title>
      <link>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/59344#M11983</link>
      <description>&lt;P&gt;&lt;a href="https://community.checkpoint.com/t5/user/viewprofilepage/user-id/6359"&gt;@CheckMate-R77&lt;/a&gt;&amp;nbsp;, as you have quoted already, there are two types of synchronisation: full and delta sync. Although full sync is extensive, it is not equivalent to the passing traffic, it is just transferring all kernel tables as is from one member to another. It also can lag a bit, delaying full functionality of the cluster but not affecting production traffic. However, delta sync is a direct function of production traffic, and it is time sensitive.&lt;/P&gt;
&lt;P&gt;There is no exact formula to calculate the required bandwidth, but it is assumed that you might need between 10 and 30% of your production bandwidth. You have limited control over delta sync by disabling or delaying sync for specific services, but that does not give you much flexibility anyway.&lt;/P&gt;
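&lt;P&gt;As an illustration of that rule of thumb: with 10 Gbit/s production interfaces, 10 to 30% works out to roughly 1 to 3 Gbit/s of sync traffic, so a single 1 Gbit/s sync link could already be at its limit.&lt;/P&gt;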
&lt;P&gt;In your specific case I would advise using a 10Gbps interface for sync, to be on the safe side. Mind you, you may try bonded interfaces, but as stated in a different comment on this post, the nature of LACP does not allow you to multiply bandwidth in this specific case.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 31 Jul 2019 11:56:57 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/59344#M11983</guid>
      <dc:creator>_Val_</dc:creator>
      <dc:date>2019-07-31T11:56:57Z</dc:date>
    </item>
    <item>
      <title>Re: ClusterXl synchronization network bandwidth</title>
      <link>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/59380#M11988</link>
      <description>&lt;P&gt;I completely agree with &lt;a href="https://community.checkpoint.com/t5/user/viewprofilepage/user-id/181"&gt;@_Val_&lt;/a&gt;&amp;nbsp; here.&lt;/P&gt;
&lt;P&gt;LACP is only used with the sync interface to make the sync fail-safe. In the beginning you could define several sync interfaces. That's no longer possible. &lt;/P&gt;
&lt;P&gt;If you want to be 100% safe, you have to use two 10 GBit/s interfaces as a bond for the sync.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 31 Jul 2019 15:24:10 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/59380#M11988</guid>
      <dc:creator>HeikoAnkenbrand</dc:creator>
      <dc:date>2019-07-31T15:24:10Z</dc:date>
    </item>
    <item>
      <title>Re: ClusterXl synchronization network bandwidth</title>
      <link>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/59391#M11991</link>
      <description>&lt;P&gt;I never saw more than 800 Mbit/s on a sync link. And that value was only seen with IPSO clustering in forwarding mode and four fully utilized 10 Gbit/s interfaces. And that utilization was only seen in the case of a full sync.&lt;/P&gt;&lt;P&gt;We had some clusters running with heavily utilized 10 Gbit/s links and a 2x 1 Gbit/s active/passive bond as sync. The highest sync utilization ever was 450 Mbit/s.&lt;/P&gt;&lt;P&gt;That's my experience, but maybe someone can show us some more production throughput on a sync interface.&lt;/P&gt;&lt;P&gt;Wolfgang&lt;/P&gt;</description>
      <pubDate>Wed, 31 Jul 2019 20:41:28 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/59391#M11991</guid>
      <dc:creator>Wolfgang</dc:creator>
      <dc:date>2019-07-31T20:41:28Z</dc:date>
    </item>
    <item>
      <title>Re: ClusterXl synchronization network bandwidth</title>
      <link>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/59397#M11992</link>
      <description>&lt;P&gt;I agree with you&amp;nbsp;&lt;a href="https://community.checkpoint.com/t5/user/viewprofilepage/user-id/1447"&gt;@Wolfgang&lt;/a&gt;.&lt;/P&gt;
&lt;P&gt;In practice, I haven't seen a firewall that has generated more than 1GBit/s sync traffic.&lt;/P&gt;
&lt;P&gt;But if you want to be on the safe side, you have to use two 10 Gbit/s interfaces as a bond.&lt;/P&gt;
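&lt;P&gt;(For reference, such a sync bond can be created from Gaia clish along these lines; the group number and interface names here are only placeholders, and the mode can be active-backup, 8023AD or round-robin depending on the design:)&lt;/P&gt;
&lt;PRE&gt;add bonding group 1
add bonding group 1 interface eth3
add bonding group 1 interface eth4
set bonding group 1 mode active-backup
save config&lt;/PRE&gt;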
&lt;P&gt;PS:&amp;nbsp;But I also have several firewalls running with two 1 Gbit/s sync interfaces as an LACP bond :-)&lt;/P&gt;</description>
      <pubDate>Wed, 31 Jul 2019 21:21:30 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/59397#M11992</guid>
      <dc:creator>HeikoAnkenbrand</dc:creator>
      <dc:date>2019-07-31T21:21:30Z</dc:date>
    </item>
    <item>
      <title>Re: ClusterXl synchronization network bandwidth</title>
      <link>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/59398#M11993</link>
      <description>&lt;P&gt;It would be interesting to hear what R&amp;amp;D recommends here :-)&lt;/P&gt;
&lt;P&gt;It's just an idea. Maybe you can calculate this from the connections in the state table. Is there a rule of thumb here?&lt;/P&gt;</description>
      <pubDate>Wed, 31 Jul 2019 21:30:53 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/59398#M11993</guid>
      <dc:creator>HeikoAnkenbrand</dc:creator>
      <dc:date>2019-07-31T21:30:53Z</dc:date>
    </item>
    <item>
      <title>Re: ClusterXl synchronization network bandwidth</title>
      <link>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/59402#M11995</link>
      <description>&lt;P&gt;In my experience 1 Gbit is sufficient for cluster state table sync unless the cluster has an extremely high new connection rate passing through it.&amp;nbsp; Loss and re-transmits on the sync interface as reported by &lt;STRONG&gt;cphaprob syncstat&lt;/STRONG&gt; are typically caused by overall high CPU load on the cluster members, not by a lack of raw bandwidth on the sync interface.&amp;nbsp; High CPU load can be mitigated with CoreXL/SecureXL tuning as described in my "Max Power" book, as long as the firewall hardware was sized appropriately.&amp;nbsp; By selectively disabling synchronization for services such as DNS, HTTP, and HTTPS the amount of sync traffic (and associated CPU utilization) can be reduced significantly.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 01 Aug 2019 02:14:23 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/59402#M11995</guid>
      <dc:creator>Timothy_Hall</dc:creator>
      <dc:date>2019-08-01T02:14:23Z</dc:date>
    </item>
    <item>
      <title>Re: ClusterXl synchronization network bandwidth</title>
      <link>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/59475#M12018</link>
      <description>&lt;P&gt;What a great discussion we have here. I think I'll follow the Check Point recommendation anyway.&lt;/P&gt;&lt;P&gt;By the way, it would be very interesting to test this in a CP lab.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thank you all&lt;/P&gt;</description>
      <pubDate>Fri, 02 Aug 2019 08:12:40 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/59475#M12018</guid>
      <dc:creator>CheckMate-R77</dc:creator>
      <dc:date>2019-08-02T08:12:40Z</dc:date>
    </item>
    <item>
      <title>Re: ClusterXl synchronization network bandwidth</title>
      <link>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/59479#M12020</link>
      <description>&lt;P&gt;I am glad you guys have never seen an issue with a sync interface being a bottleneck. I did, in a couple of very special VSX-related cases. That is not to say you are safe with a physical cluster.&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 02 Aug 2019 10:41:20 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/59479#M12020</guid>
      <dc:creator>_Val_</dc:creator>
      <dc:date>2019-08-02T10:41:20Z</dc:date>
    </item>
    <item>
      <title>Re: ClusterXl synchronization network bandwidth</title>
      <link>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/97724#M19158</link>
      <description>&lt;P&gt;Hello,&amp;nbsp;&lt;/P&gt;&lt;P&gt;Sorry, I have a question. How can you check the utilization of the sync interface in VSX?&amp;nbsp;&lt;/P&gt;&lt;P&gt;Is there a command? I tried to use SmartView Monitor but did not find it.&amp;nbsp;&lt;/P&gt;&lt;P&gt;Regards,&amp;nbsp;&lt;/P&gt;&lt;P&gt;Julian S.&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 28 Sep 2020 14:39:42 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/97724#M19158</guid>
      <dc:creator>Julian_Sanchez</dc:creator>
      <dc:date>2020-09-28T14:39:42Z</dc:date>
    </item>
    <item>
      <title>Re: ClusterXl synchronization network bandwidth</title>
      <link>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/97732#M19159</link>
      <description>&lt;P&gt;The rule of thumb is to calculate it based on the overall bandwidth utilization of your VSX cluster. We are talking about up to 5% of overall bandwidth. If you want to be on the safe side, take up to 10%.&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 28 Sep 2020 15:15:43 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/97732#M19159</guid>
      <dc:creator>_Val_</dc:creator>
      <dc:date>2020-09-28T15:15:43Z</dc:date>
    </item>
    <item>
      <title>Re: ClusterXl synchronization network bandwidth</title>
      <link>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/108792#M20698</link>
      <description>&lt;P&gt;Hi Tim,&lt;/P&gt;
&lt;P&gt;I know this subject has been discussed in the past, but can you pitch in with your opinion on current versions' preference for either direct-link or via-switch connectivity for Sync interfaces?&lt;/P&gt;
&lt;P&gt;I too fail to justify 10G or multiple 10G links for the state table sync unless it is used for some kind of large IoT environment.&lt;/P&gt;</description>
      <pubDate>Mon, 25 Jan 2021 18:50:50 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/108792#M20698</guid>
      <dc:creator>Vladimir</dc:creator>
      <dc:date>2021-01-25T18:50:50Z</dc:date>
    </item>
    <item>
      <title>Re: ClusterXl synchronization network bandwidth</title>
      <link>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/108794#M20699</link>
      <description>&lt;P&gt;This actually came up just the other day &lt;A href="https://community.checkpoint.com/t5/Security-Gateways/Best-Practice-for-HA-sync-interface/m-p/108572#M14651" target="_self"&gt;in another thread&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;I would concur with the people here that LACP on a sync bond is not useful, but round robin works perfectly and gives you the full throughput of all the interfaces you add to the bond. I wouldn't recommend LACP anyway because that implies a single switch carrying all the sync traffic. If it fails, both members will see all of their sync ports go down, and bad things will happen. A bond with round robin transmission can go through two totally separate switches which don't know about each other. That lets you lose a switch without the cluster caring.&lt;/P&gt;</description>
      <pubDate>Mon, 25 Jan 2021 19:25:08 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/108794#M20699</guid>
      <dc:creator>Bob_Zimmerman</dc:creator>
      <dc:date>2021-01-25T19:25:08Z</dc:date>
    </item>
    <item>
      <title>Re: ClusterXl synchronization network bandwidth</title>
      <link>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/108797#M20700</link>
      <description>&lt;P&gt;In short, a direct cabling for the sync interface in ClusterXL is fine now but it didn't used to be.&amp;nbsp; The following is from memory which is a bit hazy since it was so long ago.&lt;/P&gt;
&lt;P&gt;The long answer is that a hub/switch was required on the Nokia IPSO boxes when they were using Nokia Clustering (active-active basically, not VRRP) for the "Clustering Network" between the 2 IPSO appliances.&amp;nbsp; This was actually a distinct and separate interface from the Check Point state sync network.&amp;nbsp; If a direct cable was used for the Nokia Clustering Network and one of the members was powered off or the cable was unplugged, it would cause a traffic interruption as the cluster state bounced and reformed with one member.&amp;nbsp; Thus the use of a hub or switch to maintain link integrity (green light) on the NIC during such an event to prevent the bounce.&lt;/P&gt;
&lt;P&gt;As for ClusterXL, Check Point introduced something called "New HA" around version NG (R50), which is more or less what we still use today.&amp;nbsp; The original HA implementation was renamed "Legacy HA" and still available for a few versions after that.&amp;nbsp; The "Legacy HA" code did require the use of the hub or switch for the sync network to avoid a similar traffic interruption caused by a cluster bounce if link state was lost on the sync interface.&amp;nbsp; However "New HA" changed how it dealt with failures specifically on the sync interface.&amp;nbsp; I'm not completely sure about this, but I think that when a cluster member detects a sync interface failure, under New HA it waits ~2.5 seconds (the ClusterXL dead timer) to decide what to do before possibly changing state.&amp;nbsp; If that member is currently active and detects a sync interface failure (especially due to a dead or flapping peer)&amp;nbsp; it remains active without any bouncing.&lt;/P&gt;
&lt;P&gt;When Legacy HA would detect a sync interface failure, based on what it knew from the last CCP update from the peer, it would assume that the peer was in a "better" state and immediately go standby.&amp;nbsp; If the sync failure was due to the other member dying or otherwise going away, it would take the member that just went standby ~2.5 seconds (the ClusterXL dead timer) to realize it was the only surviving member of the cluster, and return to active state (i.e. "bounce").&amp;nbsp; Needless to say if the peer member had a hardware or severe operating system problem and was constantly flapping the sync interface, this would lead to numerous cluster bounce events that were definitely noticeable.&lt;/P&gt;
&lt;P&gt;As far as bandwidth for the sync interface, 1Gbps should really be enough.&amp;nbsp; If you are having sync interface issues I doubt it is because the members are saturating a 1Gbps interface with delta state sync updates.&lt;/P&gt;</description>
      <pubDate>Mon, 25 Jan 2021 19:59:41 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/108797#M20700</guid>
      <dc:creator>Timothy_Hall</dc:creator>
      <dc:date>2021-01-25T19:59:41Z</dc:date>
    </item>
    <item>
      <title>Re: ClusterXl synchronization network bandwidth</title>
      <link>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/108798#M20701</link>
      <description>&lt;P&gt;With new-mode HA, a member whose sync interface goes down still checks to see if it's the healthiest member of the cluster and may decide to go down to prevent contention for the cluster VIPs. Specifically, if you have a cluster interface which doesn't have any other devices on it which respond to ping, the cluster member (even if it was active at the time of the sync failure) will assume it has a problem.&lt;/P&gt;
&lt;P&gt;And agreed, saturating sync throughput is not a common cause of sync issues. The big benefit of bonded sync is fault tolerance, not throughput. 1g of sync throughput should be enough for half a million connections per second, no problem. Probably much more. Sure, some environments see connection volumes that high, but not many.&lt;/P&gt;</description>
      <pubDate>Mon, 25 Jan 2021 20:12:33 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/108798#M20701</guid>
      <dc:creator>Bob_Zimmerman</dc:creator>
      <dc:date>2021-01-25T20:12:33Z</dc:date>
    </item>
    <item>
      <title>Re: ClusterXl synchronization network bandwidth</title>
      <link>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/108805#M20704</link>
      <description>&lt;P&gt;In the past, we used to say "use the largest interface type used for a sync interface" (meaning if you were using 1gb data interfaces, use 1gb sync interfaces too).&lt;BR /&gt;I believe the current guidance on sync traffic is 2Gbps max.&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 25 Jan 2021 22:46:24 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/108805#M20704</guid>
      <dc:creator>PhoneBoy</dc:creator>
      <dc:date>2021-01-25T22:46:24Z</dc:date>
    </item>
    <item>
      <title>Re: ClusterXl synchronization network bandwidth</title>
      <link>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/108806#M20705</link>
      <description>&lt;P&gt;LACP does not imply that you are using a single switch.&lt;/P&gt;&lt;P&gt;LACP can be used with a switch stack.&lt;BR /&gt;Also, most datacenter switches support functions such as vPC (Cisco Virtual Port Channel), allowing you to build an LACP bond to different physical switches within a vPC pair.&lt;/P&gt;</description>
      <pubDate>Mon, 25 Jan 2021 23:13:06 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/ClusterXl-synchronization-network-bandwidth/m-p/108806#M20705</guid>
      <dc:creator>Magnus-Holmberg</dc:creator>
      <dc:date>2021-01-25T23:13:06Z</dc:date>
    </item>
  </channel>
</rss>

