Mk_83
Collaborator

Seeking Guidance on Adding 9400 Appliance to Existing 15400 VSX Cluster

Hello everyone,

I’d like to ask for your advice regarding a VSX-related matter.

Currently, I’m working on a case that involves extending a VSX cluster from an existing pair of 15400 appliances to an additional 9400 appliance. The system is already running a VSX cluster on the two 15400s. However, the pair of 15400 appliances has to be moved to another site, so to avoid downtime we are planning to add the 9400 to the same VSX cluster and let it take over the traffic in the meantime.

From what I’ve researched, one of the requirements for a VSX cluster deployment is that the data interfaces must have identical names across all participating devices.

Here’s the situation:

  • On the 15400, the interface names follow the format: eth1-01, eth1-02, ...

  • On the 9400, the interfaces are named: eth1, eth2, ...

Because of this difference, the 9400 cannot recognize the existing interface configuration from the 15400 appliances during the VSX configuration.
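For reference, this is roughly how the mismatch shows up when listing the interfaces in Gaia clish on each appliance (output abbreviated and illustrative):

    # On a 15400 member (expansion-card naming)
    show interfaces
    # -> eth1-01, eth1-02, ..., Mgmt, Sync

    # On the 9400 (onboard naming)
    show interfaces
    # -> eth1, eth2, ..., Mgmt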


My proposed solution:

I am considering purchasing an additional 10Gb expansion card for the 9400. Based on my research, the interfaces on the expansion module (slot 1) of the 9400 use the naming convention eth1-01, eth1-02, which would match the interface names of the 15400, making them compatible for VSX clustering.

That said, I haven’t implemented this type of setup before, so I’m not entirely certain whether this approach will work.

Have you ever encountered a similar scenario? If so, I’d really appreciate it if you could share your experience or suggest the best course of action for this situation.

Thank you all very much for your support!

5 Replies
Bob_Zimmerman
MVP Gold

This combination of boxes isn't supported, and a hard outage is unavoidable. That said, the outage could probably be limited to <30 minutes.

The interface naming limitation is the big reason why VSX deployments should have everything in bonds, even if the bonds only have one member. It's easy to move a bond from physical interface to physical interface.
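To illustrate the bond-per-interface idea, a minimal Gaia clish sketch on a VSX member might look like this (bond and interface names are placeholders, and the bond mode depends on your switch side):

    # Create a bond with a single member interface
    add bonding group 1
    set bonding group 1 mode active-backup
    add bonding group 1 interface eth1-01
    save config

The Virtual Systems then reference bond1 rather than the physical NIC, so the physical interface behind the bond can change without touching the VSX configuration.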

How much time do you have to do this? If you're able to do several windows, I would highly recommend moving the interfaces to bonds first using 'vsx_util change_interfaces'. This will also involve a hard outage for the traffic going through each interface, since you can't add an interface to a bond when it has other configuration. You would build the bond with a dummy interface, swap them through vsx_util, add the now-unused old interface to the bond, then remove the dummy interface from the bond.
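A rough outline of that sequence, with bond1 as the new bond, eth1-01 as the in-use interface being migrated and eth1-05 as the unused dummy (all names are placeholders; vsx_util change_interfaces is an interactive tool run on the management server):

    # 1. On each VSX member (clish): build the bond on the unused dummy interface
    add bonding group 1
    add bonding group 1 interface eth1-05
    save config

    # 2. On the management server (expert mode): swap eth1-01 for bond1 in the
    #    VSX configuration; the tool prompts for the details interactively
    vsx_util change_interfaces

    # 3. On each VSX member (clish): move the now-unconfigured old interface
    #    into the bond and retire the dummy
    add bonding group 1 interface eth1-01
    delete bonding group 1 interface eth1-05
    save config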

Once everything is on bonds, you could build a new box with the same number of bonds arranged any way you want, then use vsx_util reconfigure to have it replace one of the old members. Again, there will be a hard outage when you fail traffic from the last 15400 to the first 9400.
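As a sketch of that last step (the new 9400 gets a clean install, the First Time Wizard and SIC, but is not configured as a VSX gateway manually; the rest is driven from the management server):

    # On the management server (expert mode): push the stored VSX configuration
    # of the member being replaced onto the freshly installed 9400; the tool
    # prompts interactively for the cluster, member and activation key
    vsx_util reconfigure

Plan for the hard outage at the point where traffic fails over from the last 15400 to the first 9400, as mentioned above.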

Mk_83
Collaborator

Many thanks for the information! It has really helped me understand the situation better.

Would you happen to have any official documentation that explicitly states "This combination of boxes isn't supported" for a VSX cluster setup? It would be extremely helpful to have a reference document to confirm this limitation, as it would make it easier for us to communicate and justify the restriction internally.

I did search through the available resources but could only find sk162373, which refers to Maestro and not specifically to VSX Cluster.

If there's any document or SK that clearly defines the hardware compatibility limitations for VSX clusters, I’d greatly appreciate it if you could share it with me.

Bob_Zimmerman
MVP Gold

It's more that Check Point only supports any kind of clustering between identical boxes. For the most part, they don't document configurations which they don't support, as there are too many of them.

The real limitation is the CoreXL topology, which is difficult to fully control on VSX. Systems with different CoreXL topologies cannot synchronize.
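If you want to see what has to line up, you can compare the CoreXL layout per Virtual System on each member, for example (the VS ID is illustrative):

    # On each cluster member, in expert mode
    vsenv 1               # switch context to VS 1
    fw ctl multik stat    # lists the CoreXL firewall instances for this VS

    # Repeat per VS; the number of instances must match on every member
    # for state synchronization to work.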

Wolfgang
MVP Gold

A VSX cluster is based on ClusterXL…

see Hardware Requirements for Cluster Members 

Hardware Requirements for Cluster Members

ClusterXL operation completely relies on internal timers and calculation of internal timeouts, which are based on hardware clock ticks.

Therefore, in order to avoid unexpected behavior, ClusterXL is supported only between machines with identical CPU characteristics.


Best Practice - To avoid unexpected fail-overs due to issues with CCP packets on cluster interfaces, we strongly recommend to pair only identical physical interfaces as cluster interfaces - even when connecting the Cluster Members via a switch.

For example:

  • Intel 82598EB on Member_A with Intel 82598EB on Member_B

  • Broadcom NetXtreme on Member_A with Broadcom NetXtreme on Member_B
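If you need to verify this on your own boxes, a quick comparison of CPU model and NIC driver between members can be done from expert mode with standard Linux tools (eth1-01 is a placeholder):

    # CPU model and core count on this member
    grep "model name" /proc/cpuinfo | sort -u
    grep -c ^processor /proc/cpuinfo

    # Driver and model behind a given cluster interface
    ethtool -i eth1-01

Run the same commands on the other member and compare the output.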

Chris_Atkinson
MVP Gold CHKP

Note a similar discussion was recently answered here:  
https://community.checkpoint.com/t5/Security-Gateways/Hardware-Compatibility-for-Adding-a-gateway-to...

CCSM R77/R80/ELITE
