Failover between different HW with cphacu
Hi wonderful CheckMates!
I have a quick question for you:
I want to do a zero-downtime upgrade. I'm upgrading an R77.20 cluster on 4400 appliances to brand-new 5600 appliances running R80.30.
Do you think that with different hardware the cluster will be in Active/Down and cphacu start will work?
I've never tried it before, but I think it will work if the CoreXL instance count is the same.
D!Z
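Before committing to cphacu, it may be worth comparing what the two appliance models actually report. A minimal sketch, run in expert mode on each member (standard commands, nothing environment-specific assumed):

    # Number of CPU cores the kernel sees
    grep -c ^processor /proc/cpuinfo

    # Number and status of the configured CoreXL FW instances
    fw ctl multik stat

    # Current ClusterXL state of this member
    cphaprob stat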
It definitely won't work because the 4400 and 5600 have a different number of cores.
Hi @PhoneBoy ,
I know the cluster will be in Active/Down.
To be specific, what I'm asking is:
If I set the EXACT SAME number of CoreXL instances on both members via cpconfig (even though at the hardware level they have a different number of physical CPU cores), do you think the cphacu start command still won't work properly?
D!Z
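For what it's worth, even with an identical instance count set in cpconfig, the instance-to-core mapping still differs between a 2-core and a 4-core box. A hedged way to compare the mapping on each member (expert mode; the output will obviously differ per appliance):

    # Show how CoreXL FW instances and interfaces are bound to CPU cores
    fw ctl affinity -l

    # Same information, listed per CPU core
    fw ctl affinity -l -r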
That won't work.
You could involve TAC and let them research a solution - but as we all know, this is not supported...
Connectivity Upgrade should work according to this:
https://sc1.checkpoint.com/documents/Best_Practices/Cluster_Connectivity_Upgrade/html_frameset.htm
Ideally, test the process in the lab 🙂
It might be tricky to push policy in the middle of the upgrade if you are changing interface names, because then you need to redo the topology and anti-spoofing.
In those cases, I normally pre-build the new boxes in the lab and install the latest policies using a lab Mgmt server. Then you do a "hard" failover by connecting one of the new firewalls in place of the existing standby and running cpstop on the old active member. Then you add the other new box and, once both are running, establish SIC and update the relevant parts of the cluster object on the production Mgmt server.
You will lose 1-3 pings and the rest of the connections will have to be re-established, of course. So it's not zero downtime.
If you're not changing interface names, try the process from the document; it should work.
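Very roughly, the flow in the linked guide looks like the sketch below, run once the second member has been rebuilt on R80.30 and has the policy installed. This is only an outline; the exact sequence and any extra steps are in the guide itself:

    # On the upgraded (R80.30) member: start the connectivity upgrade sync
    cphacu start

    # Monitor until the connections have been synced from the old active member
    cphacu stat

    # On the old R77.20 active member: stop Check Point services to trigger the failover
    cpstop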
Hi guys,
Thank you all for your feedback.
In this case I will take a downtime, but it is very minimal, and it gives me the ability to roll back immediately to the old cluster member.
Let me explain and share it with the community; let me know what you think about it:
1. Disconnect the R77.20 standby cluster member.
2. Connect the new R80.30 member with ONLY the interface that leads to the Mgmt server.
3. Change the cluster version, fix the cluster member topology, and install policy (remove the "if fails" installation flag).
4. Disconnect the R77.20 active member (DOWNTIME).
5. Quickly connect all the remaining interfaces on the R80.30 member (check the ARP tables on the network equipment/routers connected to the R80.30 gateway and clear them if needed).
6. Verify that everything is working fine (see the verification sketch below) -- END OF PROCEDURE
** If there are issues, you can switch back to the R77.20 cluster member.
I know this will interrupt all connections, and the customer has to agree to it.
At least you have the ability to quickly switch from the node with the old SW version to the new one and vice versa.
This is basically the same procedure I have applied with cphacu in other situations.
In this case we cannot use cphacu, as we already discussed, so I think this is the only way to do it.
If someone has another idea/solution, it would be helpful.
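For step 6, a hedged sketch of the checks I would run on the new R80.30 member right after the cutover (expert mode, nothing environment-specific assumed):

    # Cluster state of this member and its peer
    cphaprob stat

    # Cluster interfaces are up and recognized
    cphaprob -a if

    # Policy name and enforcement status
    fw stat

    # SIC trust with the Mgmt server is established
    cp_conf sic state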
It actually should be safe to connect all interfaces (at your own risk); the cluster logic should keep the member with the higher version in the READY state, not Active, until you stop or disconnect the R77.20 member. You should be able to test all the steps in a VM lab if you have one.
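If you want to watch that transition live, a simple loop like this on the new member (expert mode) should do it; expect it to sit in Ready while the R77.20 box is still active and to flip to Active once the old member is stopped or disconnected:

    # Print the cluster state every few seconds while the old member is taken down
    while true; do
        date
        cphaprob stat
        sleep 5
    done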
4400 -> 2 Cores -> Intel Celeron Dual-Core E3400 2.6 GHz
5600 -> 4 Cores -> Intel Core i5-4590S, 3.00GHz (Quad-Core)
If you want to use all 4 cores on the 5600 appliance, there is no way to swap the systems without losing sessions.
More reading here: R80.x - cheat sheet - ClusterXL
When you start, the systems should have the following status:
[4400 A] -> Active
[4400 B] -> Standby
1. [4400 B] Power off the R77.20 standby cluster member (4400 B).
2. [5600 B] Connect the new R80.30 member and configure interfaces, routes, etc. with the same settings as the old [4400 B] (see the clish sketch below).
3. [5600 B] Establish SIC, add the license, change the cluster version, fix the cluster member topology, and install policy on gateway [5600 B] (remove the "if fails" installation flag).
Note: The member with the lower CCP version (Gaia version) remains Active [4400 A].
4. [4400 A] Power off the R77.20 appliance (4400 A).
Note: At this point you lose all your sessions and [5600 B] should become Active.
5. If possible, delete the ARP entries on all participating routers in real time.
6. [5600 A] Connect the second new R80.30 member and configure interfaces, routes, etc. with the same settings as the old [4400 A].
7. Establish SIC, add the license, fix the cluster member topology, and install policy on both new gateways (add the "if fails" installation flag back).
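For steps 2 and 6, pre-staging the interfaces and routes in Gaia clish before the cutover keeps the window short. A rough sketch with made-up addresses and interface names:

    # Gaia clish on the new member - all values below are placeholders
    set interface eth1 ipv4-address 192.168.1.2 mask-length 24
    set interface eth1 state on
    set static-route default nexthop gateway address 192.168.1.1 on
    save config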
NICE @HeikoAnkenbrand
This was exactly my idea with a more detailed explanation!!!
I'm happy that you agreed on this one!
Always take a snapshot of the Mgmt, ALWAYS! haha 😄
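Agreed 🙂 For reference, on a Gaia-based Mgmt server a snapshot can be taken from clish before touching anything; the name and description below are just examples:

    # Gaia clish on the Mgmt server
    add snapshot pre_r80_cluster_swap desc before_hw_and_version_change
    show snapshots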