Daniel_Cimpeanu
Collaborator

Quantum Spark HA migration to 3600 HA

Hi,

I'm looking for advice regarding a migration from a Quantum Spark HA cluster to a 3600-based cluster, where onsite access is limited and I'm forced to reuse the same IP addresses for the cluster. The Spark appliances are running on access ports through the switch in front of them, whereas for the 3600s I'm going for LACP trunks for both LAN and WAN through available interfaces on the same switch.

My approach was to configure the two 3600 appliances to mirror, to some extent, the Spark setup, taking into consideration the physical connections, VLANs and IP addressing.
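
For reference, the LAN-trunk part of that staging looks roughly like this in Gaia Clish on each 3600 - a sketch only, the interface names, VLAN ID and address below are placeholders rather than the real values:

# Sketch (Gaia Clish on a 3600) - interface names, VLAN and IP are placeholders
# Member interfaces must be unconfigured before they can join the bond
add bonding group 1
add bonding group 1 interface eth3
add bonding group 1 interface eth4
set bonding group 1 mode 8023AD
# Tagged LAN VLAN on top of the LACP bond
add interface bond1 vlan 20
set interface bond1.20 ipv4-address 10.20.0.2 mask-length 24
set interface bond1.20 state on
save config

The WAN trunk would be a second bonding group built the same way.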

I closed down the switch interfaces for one of the Spark appliances and attempted to set up the corresponding 3600 member of the new cluster - in essence having fw1 active on the 3600 cluster and fw2 active on the Spark cluster. Needless to say, things went sideways.

The problem I'm having is that the Spark cluster is my only Internet access to the site, and therefore also my only path to the switch that controls the connections.

What would be an acceptable way to carry out such a migration, given that I cannot take the site fully offline?

 

Thanks,
Daniel

 

8 Replies
_Val_
Admin

It does not matter how you play it: when switching from one cluster to another, there will be a short downtime during the cutover to the new setup.

Do it after hours.

Daniel_Cimpeanu
Collaborator

That’s what I’m trying to avoid, simply because I don’t have physical access to the location and devices. I’d gladly accept the downtime, and it would be somewhat straightforward if I could get console access to the switch and appliances, but I am forced to rely on the active connection through the firewalls, be it the Quantum Spark or the 3600s. And unfortunately I don’t have a spare public IP available to use temporarily on the cluster either - hence the challenge 😕

_Val_
Admin

Some more details are needed. Is this a remote site whose only Internet access is provided by those firewalls? Which IPs are you using to install policies on those appliances?

 

Daniel_Cimpeanu
Collaborator

It's indeed a remote site managed through the firewall cluster's public IPs, with Internet access currently through the Quantum Spark cluster I need to replace.

Chris_Atkinson
Employee

Apologies if I have misread your summary...

Note that different models cannot be used together to form a cluster; moreover, these appliances run different operating systems.

Hence this type of migration will involve downtime. Are the appliances all centrally managed?

CCSM R77/R80/ELITE
Daniel_Cimpeanu
Collaborator

Hi Chris,
I’m not trying to mix and match, but rather replace a Quantum Spark cluster with a 3600 cluster.

All appliances are centrally managed.

 

Jim_Holmes
Employee
Employee

Without console access, you are going to have problems. Will you have at least remote power access to restart stuff? Smart-hands that plug a laptop into the serial port?

***** Disclaimer --- This is off the top of my head and should be reviewed with your SE and/or TAC

Forgetting the above...

Assumptions:

- Your ISP is giving you only one external IP address and will not float you an additional one to cut over.

- You need/prefer to keep internal routes the same

- You have all the layer-2 networking happy

What I would do

- Configure all external interfaces on the 3600s with RFC1918 addresses (not the same as the SMB boxes) - see the sketch after this list
- Configure all of the internal interfaces with appropriate addresses (not the same as the SMB boxes)
- Configure the internal interfaces with a VIP to test connectivity to the 3600s
- Turn off all antispoofing for the cutover
- Manually update the VIPs to the correct addresses and remove them from the SMB boxes
- Adjust policy to ensure you can get to the VIPs and interfaces for all involved gateways
*** This is the most likely place where you will break things ***
- Install Database
- Push policy
- Test per predetermined test plan
- If things look right but are not working, clear ARP caches on network devices (Ciscos are known for long ARP timeouts.)
- Test again
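
As a rough illustration of the first two bullets, per 3600 member in Gaia Clish - the interface names and RFC1918 ranges below are placeholders, and the VIP changes themselves are made on the cluster object in SmartConsole, not on the box:

# Sketch - temporary staging addresses on one 3600 member (placeholders)
set interface eth1 ipv4-address 192.168.254.2 mask-length 29
set interface eth1 state on
set interface eth2 ipv4-address 192.168.1.2 mask-length 24
set interface eth2 state on
save config

For the ARP step, on Cisco IOS the whole table can be flushed with clear arp-cache; other switch vendors have an equivalent.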

Works
|
|--Yes, go home and have a beer
|
--No, revert, go home and have two beers

***** Disclaimer --- This is off the top of my head and should be reviewed with your SE and/or TAC

Aka, Chillyjim
Daniel_Cimpeanu
Collaborator

Thanks for the hints, I managed to get the new cluster installed by using an external machine as a jump host onto the core switch. I had the switch interfaces closed for the old cluster, flushed the ARP table, waited for a bit and then enabled the interfaces towards the new cluster. There was a bit of downtime, but it was at least done out of office hours.
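
For anyone attempting the same, the switch-side cutover boiled down to a handful of commands - a rough sketch of what it looked like, assuming a Cisco IOS core switch and placeholder port names:

! Sketch only - assuming a Cisco IOS core switch; port names are placeholders
! 1) shut the access ports towards the old Spark cluster
configure terminal
interface range GigabitEthernet1/0/1 - 2
 shutdown
end
! 2) flush stale ARP entries for the reused gateway IPs, then wait a moment
clear arp-cache
! 3) bring up the LACP port-channels towards the new 3600 cluster
configure terminal
interface Port-channel1
 no shutdown
interface Port-channel2
 no shutdown
end

The exact interface names, and whether port-channels are involved at all, obviously depend on the switch in question.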
