Brad_Case
Explorer

VSX VSLS versus

Hey Guys,

This is a general question regarding VSX deployment with VSLS versus VSX deployment with HA.

If I have VSX deployed in a cluster of 4 gateways running in HA mode, would there be one member holding all the active VSs, and the other three members in the standby state? Would the other three members be in an actual standby state, actively receiving sync traffic?

This is noted in the admin guide for R80.10:

When the convert_cluster command finishes, there should be only one active member on which all Virtual Systems are in the active state, and one standby member on which all Virtual Devices are in the standby state. Any additional members should be in standby mode and their Virtual Devices in the down state.

This makes me think that there is only really one standby gateway in a cluster that consists of more than 2 gateways.

XBensemhoun
Employee

In HA mode, there is only one active member, on which every VS is active at any given time.

On all the other VSX Gateways, the VSs should be in the standby state.

In VSLS, the VSs are divided between the VSX Gateways: each VS is active on one VSX Gateway, standby on a second VSX Gateway (according to its priority), and in the backup state on all the other VSX Gateways, as explained in the following schema:

(picture from the Normalized VSLS Deployment Scenario chapter of the R80.10 VSX Admin Guide).
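
As a rough sketch of how to check this on the members themselves (all commands run in expert mode; the VS ID is just an example value, and the exact output wording differs between versions):

    vsx stat -v        # list the Virtual Systems hosted on this member
    vsenv 3            # switch the shell context to VS ID 3 (example ID)
    cphaprob state     # per-VS cluster state: in HA mode every VS on a member
                       # shows the same state; in VSLS one member can be Active
                       # for one VS and Standby or Backup for another
    vsenv 0            # return to the VSX context (VS0)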

Note that from sk81300 (What is the maximum number of members supported in a Check Point cluster?) we understand that:

the recommended maximum number of nodes is 4 regardless of the clustering mode used.

but

VSX VSLS uses a more efficient state synchronization method and can safely support more than 4 nodes in a cluster

Information Security enthusiast, CISSP, CCSP
Brad_Case
Explorer

Great, thanks for the clarification. Regarding VSLS where you have 4 VSX gateways in a cluster, in which case you would have 2 VSX gateways in the backup state: in a failure event, when the active VSX gateway for a virtual firewall goes down, can you control which of the VSX gateways becomes next in line for the standby state? The R80.10 VSX admin guide seems to imply that you can, but I've been told there are only three priorities you can define.

_Val_
Admin

Yes, you can. There are multiple mechanisms available under the vsx_util vsls CLI. You can set up automatic "balancing" of VSs among the cluster members based on weights or priorities, or pin them manually.
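
For reference, a rough sketch of the flow (vsx_util runs on the Security Management Server in expert mode and is interactive; the exact prompts and menu wording vary by version):

    vsx_util vsls
    # - prompts for the management server IP and an administrator name/password
    # - asks which VSX cluster to work on
    # - then offers the VSLS redistribution menu: distribute all Virtual Systems
    #   equally across the members, or set the Active/Standby/Backup priority of
    #   each Virtual System manually, member by member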

Another note: I do not see any compelling reason to run a VSX cluster in HA mode. VSLS is the best practice.

Vladimir
Champion

Valeri,

Can you point me to the VSLS as a best practice SK?

I've just tried to explain the opposite to one of my clients: having a two-node VSLS cluster in the same datacenter prevents you from using VRs and requires you to pass traffic through the external switches between the members being balanced, if hosts behind balanced VSs connected to the same VSwitch have to talk to each other.

Also, can you chime in on the subject of having a VSwitch sandwiched between a single VS and the external interface? Since virtual switches are active/active, would this somehow improve the failover speed?

Thank you,

Vladimir

_Val_
Admin

I do not think we have such an SK, but the VSX admin guide should suffice. Since VSLS came into the picture, there is no point in using VRs for sharing physical interfaces.

However, with virtual switches, it is a different story. The most common deployment method is to share the same broadcast domain for many VSs, and the ONLY supported way to have that is through a virtual switch. 

It does not change failover time, which is very quick in any case. 

Vladimir
Champion

In this case, a new VS must be connected to a dedicated subinterface of the bond on one side and to a dedicated bond on the other. Does having a VSwitch between the VS and the subinterface add anything useful in terms of performance or failover speed, or is it completely unnecessary?

_Val_
Admin

Virtual Switches are only required when you are sharing a physical interface (or a bond with several physical interfaces) between multiple VSs. If a VS is connected to its own dedicated interface, and there are no plans to connect any other VS to the same broadcast domain, you do not need a virtual switch. 

phlrnnr
Advisor

Although, if your plans change in the future and you do need to connect two VSs to the same broadcast domain, you'll have to take downtime to insert the virtual switch. Therefore, I recommend always connecting a VS to a virtual switch rather than directly to the physical interface. The only caveat I've found is that, to sniff with tcpdump on the physical interface, you need to run tcpdump from the virtual switch context and not from the VS itself.
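
A rough sketch of that capture (the VSwitch ID and interface name are just example values):

    vsenv 2                # switch to the Virtual Switch context that owns the shared interface
    tcpdump -nni eth1-01   # capture on the physical interface from there, not from the VS context
    vsenv 0                # back to VS0 when done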

Gert_Vossius
Participant

Hi Xavier,

do these numbers (the supported number of VSX/VSLS gateways in a cluster) also apply if the VSX gateways are running on a Scalable Platform like a 64K chassis?

Thanks!

PhoneBoy
Admin

Scalable Platforms don't currently allow more than 2 members in a cluster.

_Val_
Admin

Scalable Platforms have an additional layer of load balancing among SGMs. As Dameon said, at this point in time you can only use two chassis in a cluster, but even so, multiple SGMs are available for balancing the load on each.
