CheckMates : Products : Quantum : Management : Re: Best way to migrate Cluster to new mgmt
Best way to migrate Cluster to new mgmt
Hello, I would like to know if there is an official procedure to move a cluster to another management server without losing its current policies and configuration.
I have two clusters, each managed by a different management server, but we need a single management server controlling both. The management servers run R80.10.
There are two management servers, A and B; each manages one cluster of gateways.
I need to migrate cluster B so it is managed by management server A without losing its current configuration. As I understand it, I need to migrate the configuration (policies, hosts, networks, services) from management server B to A, so that I can install policy on the gateway again and it starts operating normally.
If there is no official procedure for this, what is the recommended option?
---
I'm curious to hear what folks may suggest for this one. Are you trying to accomplish this with no outage? Or are you willing to accept some downtime?
---
Hello Daniel, Yes, I am willing to accept some downtime!
---
Are you using MDS or only SMS?
If you want to migrate both clusters to an MDS, the best way would be to create a separate CMA for each cluster, use the migrate export/import tools, and the job is done.
Jozko Mrkvicka
---
Hello Jozko, I am using SMS.
---
Step 1: Use ExportImportPolicyPackage - sk120342 (formerly cp_merge) to move the management configuration from management server B to A, or do it manually if there isn't too much configuration / rules / objects.
Step 2: Create a new cluster object for cluster B on management server A; check all settings related to that cluster object (logging settings, NAT, policy installation targets, etc.) and compare them with the configuration on management server B.
Step 3: Reset SIC on cluster B (sk86521, sk65764) and re-establish it with management server A.
Step 4: Install the security policy, check the operational status, done.
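The steps above can be sketched as a shell checklist. Treat this as a hedged outline, not a turnkey script: `cpconfig`, `mgmt_cli`, `fw stat` and `cphaprob` are standard Check Point CLI tools, but the package name "Standard" and target name "ClusterB" are placeholders, and the SIC reset inside `cpconfig` is an interactive menu in practice. The script defaults to dry-run and only prints each command, which also doubles as a change plan for the maintenance window.

```shell
#!/bin/sh
# Sketch of Steps 2-4 (Step 1 is done with the ExportImportPolicyPackage
# tool from sk120342, run separately). DRY_RUN=1 (the default) only
# prints the commands; set DRY_RUN=0 on the actual machines.
DRY_RUN=${DRY_RUN:-1}
run() {
    echo "+ $*"
    if [ "$DRY_RUN" = 0 ]; then "$@"; fi
}

# --- On each member of cluster B: reset SIC ---
# (cpconfig is interactive; choose "Secure Internal Communication"
# and set a one-time activation key.)
run cpconfig

# --- On management server A: install policy on the new cluster object.
# "Standard" and "ClusterB" are placeholder object names.
run mgmt_cli -r true install-policy policy-package "Standard" targets "ClusterB"

# --- Back on the cluster members: verify ---
run fw stat          # the imported policy name should be loaded
run cphaprob state   # expect one Active and one Standby member
```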
---
Hi Danny,
Yes, I tried that in a lab environment, but I ran into a problem: when I import the policy package to the destination management server, one cluster object and one gateway object appear in the "Gateways & Servers" tab for each site-to-site VPN.
The problem is that I can't delete these objects and I can't install policies, because the objects have issues like "the cluster object is empty" and "there is no sync with the gateway objects" (I mean the new ones, those that appear with the imported policy package).
---
I understand. Then create a new cluster object manually as I outlined in Step 2 and use that one for policy installation. Delete the one that was created during the import.
---
I can't delete those objects; it doesn't allow me to. If I could delete the objects, I would have no issues xD
---
Delete them on management server B before doing the export.
---
Hello Danny,
In preparation for our upcoming migration from R77.30 to R80.20, I followed your steps.
Step 1: I installed a fresh SMS R80.20 with a new IP in the same subnet as the old one and moved the configuration from the old management server manually, including objects and rule base.
Because I was moving the objects and rule base on my own, I was able to clean them up and consolidate some rules.
Step 2: I created the new cluster object in R80.20 with the same configuration as on the old mgmt server.
Step 3: I reset the SIC on the standby node and established trust with the new mgmt. Once the new policy was installed, I did the same with the formerly active node. Both nodes of the cluster were then connected to the new management server R80.20.
Step 4: I installed the policy on both nodes and verified it with "fw stat".
After pushing the policy, several network problems surfaced.
Problem 1
The cluster did not form. One member had the active role and the other was in the Ready state, reporting that the configuration was not identical. I guess the following messages belong to the issue, but I was not able to solve it. I checked both nodes and they have the same cluster ID, 254. I'm not sure whether the cluster object configuration in the SMS matches the nodes regarding the cluster ID.
[fw4_0];CPHA: Found another machine with same cluster ID. There is probably another cluster connected to the same switch/hub as this one.
[fw4_0];CPHA: This is an illegal configuration. Each cluster should be connected to another set of switches/hubs.
routed[22044]: api_get_member_info: failed to get memberinfo. selector = 83147e1, data = 0
Problem 2
A couple of network services (for example LDAP) weren't permanently reachable. I guess this could be related to Problem 1.
Problem 3
Our SAP Portal was not reachable via web browser. The firewall log shows that the communication was allowed, but with tcpdump I saw that the connection has been reset by the destination since installing the new policy.
Maybe this is related to Problem 1 too, but I can't say why. Another idea is that the default Threat Prevention policy is responsible: under "Manage Policies and Layers" I saw that Threat Prevention is also pushed to the cluster. Because I didn't enable the Threat Prevention features on the cluster object, I'm not sure whether pushing the Threat Prevention policy to the nodes has any effect.
Problem 4
Our two VPN tunnels weren't working after pushing the policy. I guess the service "tunnel_test" was blocked by the firewall, which I have already fixed in the new policy. When moving the cluster to the new management again, I will see whether that was the reason.
Maybe you are familiar with some of these issues?
I hope that most of them come down to the cluster not forming correctly, in my opinion.
I reverted to the snapshots I took beforehand, and then all the issues were gone. Next time I hope to be better prepared for them.
Best regards
Martin
---
Hi Martin,
Have you tried rebooting the second member that you added to the new mgmt?
I am guessing that adding the standby member caused the network to see it as a new ClusterXL master member (so you had both members Active for a short time), and at that point those messages were seen:
[fw4_0];CPHA: Found another machine with same cluster ID. There is probably another cluster connected to the same switch/hub as this one.
[fw4_0];CPHA: This is an illegal configuration. Each cluster should be connected to another set of switches/hubs.
Now it sounds like there was a problem on the second member that you added; I would suggest trying a reboot on that member.
Before adding a second member to the cluster with that method, cphastop is used to prevent split brain.
Thanks
Roy
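The "both members Active" window described above can be spotted in the output of `cphaprob state`. The snippet below is only a sketch: it counts Active members in a canned sample of that output, and the sample's exact layout is an assumption (the real format varies between versions). On an actual member you would pipe `cphaprob state` into the same grep instead of using the canned text.

```shell
#!/bin/sh
# Canned (assumed) sample of `cphaprob state` output while both members
# claim the Active role; real output differs slightly between versions.
sample='Cluster Mode:   High Availability (Active Up)

Number     Unique Address  Assigned Load   State

1 (local)  192.168.1.1     100%            Active
2          192.168.1.2     0%              Active'

# Count the member lines ending in "Active"; more than one means the
# members are not forming a cluster (split brain).
actives=$(printf '%s\n' "$sample" | grep -c 'Active$')
if [ "$actives" -gt 1 ]; then
    echo "WARNING: $actives members report Active - possible split brain"
fi
```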
---
Thanks for your reply.
I'm not sure anymore whether I rebooted the second member that I moved to the R80.20 mgmt, because after moving it to the new management it was in the Active state, and that was fine by me.
I already mentioned that I had a general problem with the way I moved the nodes, because I forgot to check that the node I moved first was the active one. As I reset the SIC on the second node before moving it, I caused a downtime because of the default policy that was installed on that node (state: Active).
At the point when both nodes had an established trust to the new management, I definitely rebooted the first moved member, which was in the Ready state, but it did not fix the cluster.
Right now I guess my way of moving the nodes from one mgmt to the other caused the cluster issue.
At the next try I will stop HA on the second member before moving it.
Split brain is not funny at all 🙂
Regards
Martin
---
I have a question about Step 3, after reading it carefully again even though it's short.
Is it important to reset the SIC and establish trust to the new MGMT R80.20 for all cluster nodes first, before installing the policy and pushing the cluster information?
In my case I reset the SIC for the standby node first, established trust with the new MGMT R80.20 and installed the new policy. So at that point there were two MGMT servers (1x R80.20, 1x R77.30), each responsible for one node (both R77.30): two nodes (Ready/Active) with slightly different policies, which probably don't work as a cluster as expected.
My aim was to move the nodes to the new MGMT without a downtime, but maybe this is the reason for my issues. I simply did it in the wrong way.
Regards
Martin
---
Hi Martin,
1) You cannot have a cluster whose members hold SIC with different management servers; both members will try to use the VIP address (you will get a duplicate IP).
2) I think the cluster is R77.30, like the "old" management. If so, please remember that you have to use different cluster IDs if two clusters share at least one VLAN.
So please plan a maintenance window, because it is impossible to avoid downtime here.
Emanuele
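For point 2, the cluster ID can be checked and changed on the members themselves. The commands below only exist on a Check Point cluster member, so this sketch just prints them in dry-run form; the `cphaconf cluster_id` syntax is an assumption to verify against sk25977 for your version (older releases set the `fwha_mac_magic` kernel parameter instead).

```shell
#!/bin/sh
# Dry-run sketch: print the cluster ID commands instead of executing
# them, since they only exist on a Check Point cluster member.
# Syntax assumed per sk25977; verify for your exact version.
show() { echo "+ $*"; }

show cphaconf cluster_id get      # print the current cluster ID
show cphaconf cluster_id set 253  # example: move one cluster off ID 254
```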
---
cp_merge is no longer supported, unfortunately:
"cp_merge" tool support on Security Management Server R80 and above
There is a Python utility that allows you to perform these tasks, but I do not believe it is officially supported by Check Point either.
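Since cp_merge is gone, the R80+ management API (which tools like that Python utility typically drive under the hood) is the supported way to read objects and policies out of an SMS. As a small illustration, the snippet below extracts package names from a canned `mgmt_cli -r true show packages --format json` response; the JSON sample is trimmed and illustrative, with field names following the management API schema.

```shell
#!/bin/sh
# Trimmed, illustrative sample of what
#   mgmt_cli -r true show packages --format json
# returns on an R80.x management server (field names per the API schema).
sample='{"packages":[{"name":"Standard","type":"package"},{"name":"ClusterB-Policy","type":"package"}],"total":2}'

# Pull out the "name" fields with plain POSIX text tools (no jq needed):
names=$(printf '%s' "$sample" | tr ',' '\n' | sed -n 's/.*"name":"\([^"]*\)".*/\1/p')
echo "$names"
```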
---
Hi Vladimir,
Yes, I tried that in a lab environment, but I ran into a problem: when I import the policy package to the destination management server, one cluster object and one gateway object appear in the "Gateways & Servers" tab for each site-to-site VPN.
The problem is that I can't delete these objects and I can't install policies, because the objects have issues like "the cluster object is empty" and "there is no sync with the gateway objects" (I mean the new ones, those that appear with the imported policy package).
---
Hi,
You have to remove all the references shown in "Where Used", set a "fake" IP on the imported CP objects, and clear the VPN flag.
Usually this trick fixes the error and allows the objects to be deleted.
Manu
