Hello Danny,
In preparation for our upcoming migration from R77.30 to R80.20, I followed your steps.
Step 1: I installed a fresh SMS R80.20 with a new IP in the same subnet as the old one and moved the configuration from the old management server manually, including objects and rule base.
Because I was moving the objects and rule base by hand, I was able to clean things up and consolidate some rules.
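In case it helps someone, the migrated objects and rules can be cross-checked on the new SMS with the R80 management API, for example:

mgmt_cli -r true show hosts limit 500 --format json
mgmt_cli -r true show access-rulebase name "Network" limit 100 --format json

Here "-r true" logs in as root directly on the SMS, and "Network" is only the default access layer name, so both may need to be adjusted.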
Step 2: I created the new cluster object in R80.20 with the same configuration as on the old management server.
Step 3: I reset the SIC on the standby node and established trust with the new management server. Once the new policy was installed, I did the same with the former active node. Both nodes of the cluster were then connected to the new R80.20 management server.
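The trust state itself can be checked on each node in expert mode with:

cp_conf sic state

It should report something like "Trust established" once the node has accepted the new management server.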
Step 4: I installed the policy on both nodes and verified it with "fw stat".
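On each node that was simply:

fw stat

It lists the installed policy name and installation date, so I could see the new policy from the R80.20 SMS on both members.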
After pushing the policy, a couple of network problems were reported to me.
Problem 1
The cluster was not built. One member had the active role and the other one was stuck in ready state, reporting that the configuration was not identical. I guess the following messages are related to this issue, but I was not able to solve it. I checked both nodes and they have the same Cluster ID (254). I'm not sure whether the cluster object configuration on the SMS matches the nodes with regard to the Cluster ID.
[fw4_0];CPHA: Found another machine with same cluster ID. There is probably another cluster connected to the same switch/hub as this one.
[fw4_0];CPHA: This is an illegal configuration. Each cluster should be connected to another set of switches/hubs.
routed[22044]: api_get_member_info: failed to get memberinfo. selector = 83147e1, data = 0
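For reference, the Cluster ID can be compared on both members in expert mode with:

cphaprob stat
fw ctl get int fwha_mac_magic
fw ctl get int fwha_mac_forward_magic

If I remember correctly, R80.20 also offers "cphaprob mmagic" to show the same MAC magic values.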
Problem 2
A couple of network services were not permanently reachable, for example LDAP. I guess this could be related to Problem 1.
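For the next attempt, running something like the following on the active member while reproducing the problem should at least show whether the LDAP traffic is dropped and for which reason (<ldap-server-ip> is just a placeholder for our LDAP server):

fw ctl zdebug + drop | grep <ldap-server-ip>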
Problem 3
Our SAP Portal was not reachable via web browser. The firewall log shows that the communication was allowed, but with tcpdump I saw that the connection has been reset by the destination ever since the new policy was installed.
Maybe this is related to Problem 1 too, but I can't say why. Another idea is that the default Threat Prevention policy is responsible for it. Under "Manage Policy and Layers" I saw that Threat Prevention is also pushed to the cluster. Because I did not check "Threat Prevention Features" for the cluster object, I'm not sure whether pushing the Threat Prevention policy to the nodes has any effect.
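For anyone who wants to reproduce the capture, the resets can be isolated with something like the following, where the interface and the SAP portal IP are placeholders for our environment:

tcpdump -nni <internal-if> "host <sap-portal-ip> and tcp[tcpflags] & tcp-rst != 0"

The filter shows only the TCP reset packets to or from that host.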
Problem 4
Our two VPN tunnels were not working after pushing the policy. I guess this is because the service "tunnel_test" was blocked by the firewall, which I have already fixed in the new policy. When I move the cluster to the new management again, I will see whether that was the reason.
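As far as I know, "tunnel_test" is UDP port 18234. The tunnel status itself can be checked directly on the gateway with:

vpn tu

This opens the TunnelUtil menu for listing and deleting the IKE and IPsec SAs.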
Maybe you are familiar with some of these issues?
I hope that some of them come down to the cluster, which in my opinion was not built correctly.
I reverted to the snapshots I took beforehand, and then all the issues were gone. Next time I hope to be better prepared for these issues.
Best regards
Martin