sphere
Explorer

Migrating cluster to VSX

Hi all,

This community has helped me a lot in the past, but now I'm facing an issue I couldn't find an answer to.

I have a pair of CP15400 appliances running in ClusterXL in front of a data center. That DC was supposed to serve a single client, which has huge server/storage resources.

Now, the client decided to rent a part of that computer power 🙂

From the moment they expressed their willingness to rent out part of their environment, we noticed a design flaw in the Check Point deployment - we don't have any means of network segregation.

My question is - has anybody been through the scenario of migrating a fully operational cluster to VSX? They don't have new boxes for this; rather, they want to use the current ones (and they have already bought the licenses). They are prepared for downtime, but the question is how to do it with minimal impact?

 

Any suggestion/idea is greatly appreciated!

Cheers,

Travis

 

5 Replies
Maarten_Sjouw
Champion

Travis,

There are a number of things that you need to be aware of:

  • When moving to VSX there will be downtime for the conversion of the cluster object.
  • When you need to manage multiple customers on one piece of hardware, management is best done on a Multi-Domain Management server:
    • You separate the underlying hardware from the customer environments in one domain.
    • You create a domain per customer to keep all their network data and rules separated and hidden from each other.
    • You only allow each customer access (if needed) to their own domain.
  • Per customer you create a VS in their domain with all interfaces and routes.
    • You can use the VSX provisioning tool to make sure this does not take too long; this takes some proper preparation but will save you a lot of clicking in the maintenance window.
  • If you do have some other hardware available, you can use it to prepare everything, as you cannot create a VSX cluster without setting up SIC in the process.
    • In the maintenance window you could just bring down those temp boxes and then use vsx_util reconfigure to apply the configuration to the 15400s.
      • One remark here: try to find something with the same interface naming. Otherwise there are ways to rename interfaces, but that is really stretching things.
    • Before running the reconfigure, make sure that on the 15400s the interfaces that will be used for the VSs are cleaned of their config - no IP addresses or VLANs should remain.
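As a rough illustration of the provisioning-tool step - a hypothetical transaction file where every object name, IP and VLAN tag is invented, so check the syntax against the VSX Provisioning Tool SK for your version:

```
# Hypothetical vsx_provisioning_tool transaction file (vs_customer1.txt);
# cluster/VS names, VLAN tags and addresses below are placeholders only.
transaction begin
add vd name VS_Customer1 vsx VSX_Cluster
add interface vd VS_Customer1 name eth1.100 ip 10.1.100.1 netmask 255.255.255.0
add route vd VS_Customer1 destination default next_hop 10.1.100.254
transaction end
```

It would then be applied from the management server with something like `vsx_provisioning_tool -s <mgmt_ip> -u admin -f vs_customer1.txt`; preparing these files in advance is what saves all the clicking in the maintenance window.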

Hope this helps you on your way!

 

Regards, Maarten
sphere
Explorer

Hi Maarten,

 

 Yeah, it helps!

Unfortunately, Multi-Domain is not an option right now. Maybe in Q3.

Either way, management of that VSX will be done by one external person/company - end users will not have access to it (the DC owner is going to become something like a managed service provider).

The idea is to have end users separated on the Nexus side, each in their own VRF, and the gateway for each VRF would be a VS on the Check Points.
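For context, the switch side of that design could look something like this hypothetical NX-OS fragment (tenant names, VLANs and addresses are invented; 10.1.100.1 would be the VS on the Check Point side):

```
! Hypothetical NX-OS sketch: one VRF per tenant, default route to its VS
feature interface-vlan

vrf context CUST1
  ip route 0.0.0.0/0 10.1.100.1

interface Vlan100
  vrf member CUST1
  ip address 10.1.100.254/24
  no shutdown
```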

One thing I'm struggling with is this - how can I migrate the current configuration from the running cluster (policies/objects/IP configuration/routes, etc.) onto a newly created VS?

Something like exporting the database and then importing it into the new VS.

 

Once again - thanks a lot for your ideas!

 

Cheers,

Travis

 

Maarten_Sjouw
Champion

Is it an all-in-one solution at the moment? I hope you do have a separate management server? If not, you have some additional work cut out for you: you need to export the database and import it into a new management server. There are also a number of things that need to be cleaned up after that.
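For the export/import, a hedged sketch of the usual tooling - paths are examples, and on recent releases the version-aware `migrate_server` tool replaces `migrate`, so match the tool to your target version:

```
# On the existing (all-in-one) management:
$FWDIR/bin/upgrade_tools/migrate export /var/log/mgmt_export.tgz

# Copy the .tgz across, then on the new management server:
$FWDIR/bin/upgrade_tools/migrate import /var/log/mgmt_export.tgz
```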
Regards, Maarten
sphere
Explorer

Nope, we do have a separate management server (a VM).

 

As a potential solution, I was thinking of building two temporary new VMs (gateways) and creating a new cluster under the SMS.

Apply all the rules to those new VMs and basically divert traffic away from the physical boxes, freeing them up.

That way I'd potentially minimize downtime and have a few days to build the VSX.

 

Does this make sense?

 

Cheers,

Travis

Eoin_Quinn
Explorer

Hi Travis

 

Just wondering what approach you took in the end with this?

We've just come across a similar situation with an existing ClusterXL being split (also with a separate management server), and we're trying to figure out the best approach to take.

Did you or anyone else figure out a solution?

 

Thanks!

Eoin
