CP Firewall Migration (From CP 23500 R81.10 To CP 19200 R81.20)
Hi All,
We are currently running a CP 23500 cluster (HA) on R81.10, with management on an Open Server running R81.20.
We have just purchased CP 19200 appliances to replace the CP 23500s. However, the CP 19200 boxes ship with R81.20. We are seeking guidance on migrating the firewall cluster environment with minimal service interruption.
Is it possible to build a cluster with members on different OS levels and different hardware?
We also plan to keep the same management IPs for the new firewall cluster.
Waiting for your guidance on this!
Thank you
- Labels: Appliance, ClusterXL, Gaia, Open Server, Routing
Accepted Solutions
If the management interface is not passing production traffic, I would recommend unplugging the mgmt interfaces on the old cluster members and building up the new ones with just the mgmt interfaces cabled. Then, once they are all up and running and SIC'd to management with the policy installed, move the other interface cables over during an outage window.
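A rough pre-cutover check on each new member might look like the following (a sketch only; these are standard Check Point CLI commands, but exact output varies by version and Jumbo Hotfix level):

```shell
# On each new CP 19200 member, before moving the production cables over:

# Confirm the cluster members see each other over the sync network
cphaprob state

# Confirm the enforcement module is up and the intended policy is installed
fw stat

# Confirm both members run the same version and hotfix level
cpinfo -y all
```

Running these before the outage window keeps the window itself to a pure cable move.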
I second this recommendation. You need to set up the new cluster in the lab and then swap the old and new appliances during a service window. Ideally, by unplugging just the MGMT interfaces from the old cluster, you should be able to establish SIC and push policy to the new cluster while it is still disconnected from the production networks.
Hi Emmap,
Many thanks for the comments. Once we unplug the mgmt cables from the old cluster members, will SmartConsole allow us to add another cluster with the same mgmt IP addresses?
Rather than add a new cluster, you can reuse the old one and just make any interface edits that need making and re-SIC the gateway objects in SmartConsole to the new hardware.
If you would prefer not to do that and set up a new cluster side-by-side, you should allocate new management IPs and new names to the new cluster.
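For reference, re-SIC'ing the existing gateway objects to the new hardware is roughly the following sequence (a sketch; the exact menu wording differs slightly between versions):

```shell
# On each new appliance (Gaia, expert mode):
cpconfig
#   -> choose "Secure Internal Communication"
#   -> set a one-time activation key

# In SmartConsole, for each existing cluster member object:
#   Gateway object -> Communication
#   -> Reset, enter the same one-time key, and re-initialize trust

# Then install policy on the cluster object.
```

This reuses the existing cluster object, IPs, and rulebase references, which is why it avoids the duplicate-IP problem of a side-by-side cluster.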
Hi,
Building a cluster with different hardware and OS versions is not supported. The number of cores is not the same, and the new appliances may have different physical interfaces.
As mentioned, if the management interface is not passing production traffic, you could create a new cluster with new management IPs and prepare everything for the migration.
If the management interface IS passing production traffic, you could create a new cluster with new management IPs, but not configure a cluster address yet. When you are ready to migrate, configure the management interface on the new cluster as a cluster interface.
If your management server is VMware, you can build a whole new setup in parallel, using a dummy VLAN to connect the new cluster and the management server.
As you can see, there are different procedures to migrate a cluster, but all of them will impact traffic.
If VPN is enabled on the current cluster, be careful enabling it on the new cluster while both are managed by the same management server. Two gateways with the same IP addresses managed by the same management server will break the VPN when policy is installed on both clusters. Please see:
Solved: Re: Two ClustersVSX withing same mgmt and VSs havi... - Check Point CheckMates
Good luck.
Martijn
With an equal or higher number of cores, you can build a (not supported, but working) cluster with different hardware.
Active/standby is also possible, unless you hit features like Hyperflow that cannot sync, leaving you with an active/down state. In any case, sessions are synced, so you can run a cpstop or unplug a cable to get a transparent failover.
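If you want to rehearse that failover before the real cutover, a minimal sequence might look like this (a sketch; run it during a maintenance window, on the currently active member):

```shell
# Check which member is active and that sync is healthy
cphaprob stat

# Trigger an administrative failover on the active member
clusterXL_admin down

# On the other member, verify it took over traffic
cphaprob stat

# Bring the downed member back as standby
clusterXL_admin up
```

Unlike pulling a cable, `clusterXL_admin down` is cleanly reversible and leaves the member otherwise running.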
