cosmos
Contributor

Cluster migration: copy topology to new cluster

Admins, please advise if this should be posted in the API board.

I'm migrating a cluster with about 60 logical interfaces across 12 physicals to new hardware. Since most of those trunks connect to the same switch fabric, I'll be consolidating them onto a 10G port channel, and I'll need to replicate the topology, including anti-spoofing, on the target cluster with different interface names.

The gateways are managed in-band on one of those logical interfaces, and we'll be replicating the logical topology. Access to the DC where the appliances are installed is managed by a third party, and all cabling goes through a request process, so a physical cable swap-out for the migration is untenable. I'm pre-configuring management for the new members on the same VLAN and shipping the boxes to the DC along with the cabling requests; once everything is cabled up there will be no physical access, so all migration tasks must be logical. Each member will use the same IPs as the old cluster members, except on the in-band management network, which allows us to stand the new cluster up in parallel.

My strategy is to pre-configure all sub-interfaces on the new members in the "off" state, except for in-band management, where I can use dedicated IPs, and build the new cluster from the existing mgmt server. Normally I would control interface migration at the switch by allowing the VLANs to the new cluster and removing them from the old, but the switch fabric (Cisco ACI) is also managed by a third party, so we would rather remove this dependency; hence deploying the logical interfaces in the "off" state.

Herein lies the challenge: in my lab, if I fetch topology from the new cluster members, we only get the interfaces that are in the "on" state, so 59 out of 60 are not fetched. I'd like to grab the topology from the old cluster, including anti-spoofing configs, and copy it to the target cluster with the new interface names (e.g. eth1-02.2049 --> bond0.2049).

I've seen a few articles on replacing hardware and/or migrating to new interfaces, all of which require manual configuration of the target topology, so I'm looking for a way to automate this. So far I've been able to grab the current cluster config from mgmt_cli in JSON format, and I'm looking to import/merge this into the new cluster once the edits are done, assuming the mgmt API will support this?

I've done a fair bit of this type of work in VSX, where the vsx_provisioning_tool has been a good friend, allowing me to export a VS config and re-import it with a new topology; the only caveat is that you can't use a group with exclusion in the interface topology. For a simple cluster, though, not so much.

Thanks in advance 🙂

Matt

8 Replies
_Val_
Admin

Run save configuration, grab the file, and change the physical interface designations in the config file (this can be done with global text changes) if the new appliance uses different names. Then paste the config into CLISH on the new appliances once you have the MGMT interface connected and up. This has worked miracles for me for many years.
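A sketch of that round-trip, assuming the saved config is plain clish commands (file names and the sample line are hypothetical, and the clish calls only run on a Gaia appliance, so they're shown as comments):

```shell
# 1. On an old member, export the running config (clish, comment only):
#      clish -c "save configuration old-member.cfg"
# 2. Globally rename the physical interface, e.g. eth1-02 -> bond0.
#    The printf below stands in for the real exported file:
printf 'set interface eth1-02.2049 ipv4-address 192.168.249.148 mask-length 29\n' > old-member.cfg
sed 's/eth1-02\./bond0./g' old-member.cfg > new-member.cfg
cat new-member.cfg   # set interface bond0.2049 ipv4-address ...
# 3. On the new appliance, once management is up, paste or replay the file:
#      clish -f new-member.cfg
```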

It is much faster and easier than creating API scripts, in my personal opinion.

cosmos
Contributor

Hey Val, thanks for the tip. I've already dumped the new interface configs onto the box using a sed script to change the interface names from original to new (e.g. eth1-02.2049 --> bond0.2049) and imported them using load configuration (with clienv set to continue on failure, for things like invalid descriptions on static routes). So the target devices already have the interface config.
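For reference, the clienv tolerance mentioned above looks like this in Gaia clish (a fragment, not runnable outside an appliance; the file name is hypothetical):

```shell
# Keep loading past non-fatal errors such as invalid static-route descriptions:
#   set clienv on-failure continue
#   load configuration new-member.cfg
#   set clienv on-failure stop
```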

The problem I was facing is that get topology in the console only fetches interfaces that are active/on, and 59 of the 60 logical interfaces are in the off state. This is necessary because all the production VLANs are on the same physical port channel as in-band management, and we don't have sufficient addressing to stand up temporary IPs while connected to the same switch fabric. We could use dummy IPs, but that would defeat the purpose of pre-building the cluster.

Manually adding the topology would be very risky, with copy/paste for each sub-interface and its anti-spoofing groups, so I wanted to script it like I have for VSX (easy with the provisioning tool).

I've managed to export the topology from mgmt_cli; if I can massage the exported JSON to match the target interface names (again using a sed script), surely I can POST it to the API to update the new cluster members:

  "uid" : "***",
  "name" : "old_cluster",
  "type" : "simple-cluster",
  "domain" : {
    "uid" : "***",
    "name" : "SMC User",
    "domain-type" : "domain"
  },
  "cluster-mode" : "cluster-xl-ha",
  "cluster-members" : [ {
    "name" : "DC1-FIR-PR-01",
    "sic-state" : "communicating",
    "sic-message" : "Trust established",
    "ip-address" : "10.0.1.254",
    "interfaces" : [ {
      "name" : "Sync",
      "ipv4-address" : "169.254.0.1",
      "ipv4-network-mask" : "255.255.255.240",
      "ipv4-mask-length" : 28,
      "ipv6-address" : ""
    }, {
      "name" : "eth1-02.400",
      "ipv4-address" : "192.168.249.148",
      "ipv4-network-mask" : "255.255.255.248",
      "ipv4-mask-length" : 29,
      "ipv6-address" : ""
    }, 
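Transforming that export is mostly mechanical. Here's a minimal sketch of the rename step using a stand-in JSON fragment (the real file would be the full show simple-cluster output, and pushing it back would be a set simple-cluster call, shown only as a comment since it needs a live management server):

```shell
# Stand-in for the exported cluster JSON:
cat > old_cluster.json <<'EOF'
{ "interfaces" : [ { "name" : "eth1-02.400" }, { "name" : "eth1-02.2049" } ] }
EOF
# Rename every eth1-02 sub-interface to its bond0 equivalent:
sed 's/"eth1-02\./"bond0./g' old_cluster.json > new_cluster.json
cat new_cluster.json
# The edited names would then feed a "set simple-cluster" call, e.g.:
#   mgmt_cli set simple-cluster name "new_cluster" ... --format json
```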
 

So I thought I would fire up Postman in my lab; it's much better at these things than mgmt_cli (I could use cURL if needed). I found the Postman collections for the version I'm running and converted the collection to v2. This simplified authentication and session management and allowed me to run some tests based on the collection samples (login, show simple-cluster, set simple-cluster, publish, and task management).

I haven't imported the topology yet but it's looking promising - I'll post back here with my findings.
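For anyone preferring cURL, the same flow maps onto the Web API endpoints the Postman collection wraps (server name and credentials below are placeholders; shown as comments since the calls need a live management server):

```shell
# Log in and capture the session id (returned as "sid" in the JSON response):
#   curl -k -X POST https://mgmt-server/web_api/login \
#        -H 'Content-Type: application/json' \
#        -d '{"user":"admin","password":"***"}'
# Subsequent calls pass the sid in the X-chkp-sid header:
#   curl -k -X POST https://mgmt-server/web_api/show-simple-cluster \
#        -H "X-chkp-sid: $SID" -H 'Content-Type: application/json' \
#        -d '{"name":"old_cluster"}'
# Persist changes when done:
#   curl -k -X POST https://mgmt-server/web_api/publish \
#        -H "X-chkp-sid: $SID" -H 'Content-Type: application/json' -d '{}'
```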

 

 
cosmos
Contributor

This could be a problem, though - we only got 50 of the 74 interfaces exported from the API:

  "interfaces" : {
    "total" : 74,
    "from" : 1,
    "to" : 50,

 

PhoneBoy
Admin

It just requires two API calls, with the second one being identical except for including the parameter offset 50.
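The two-call pattern can be sketched as a loop. fetch_page below is a local stand-in for the real API request (which would pass limit and offset in the show simple-cluster body), so the paging logic itself runs as-is:

```shell
total=74   # interfaces reported by the API
limit=50   # default page size
# Stand-in for the real API call; prints one line per interface in the page.
fetch_page() {
  start=$1
  end=$((start + limit)); [ "$end" -gt "$total" ] && end=$total
  i=$start
  while [ "$i" -lt "$end" ]; do
    echo "eth1-02.$((2000 + i))"
    i=$((i + 1))
  done
}
offset=0
: > all_interfaces.txt
while [ "$offset" -lt "$total" ]; do
  fetch_page "$offset" >> all_interfaces.txt
  offset=$((offset + limit))
done
# all_interfaces.txt now holds all 74 names from two pages (offsets 0 and 50).
```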

_Val_
Admin

OK, so your problem is not the actual migration of the physical interfaces but the topology definitions before the new cluster is in production. I suggest going as-is and updating the topology once more, when the GWs are fully operational, as part of the migration procedure. This way it will be error-proof and less cumbersome. Also, there is no need for a script to set up the topology on the MGMT side.

cosmos
Contributor

Thanks Val, that would save a good chunk of manual definitions; I'd still have to manually define 74 cluster IPs (not 60 like I originally thought).

While there's an initial hurdle to transform the JSON into an acceptable state, if I can get this working via the API I'll document the process in case it helps others do the same 🙂

@PhoneBoy is that just { "limit-interfaces": 80 } in the request body?

PhoneBoy
Admin

If you want to get the results in a reliable way, you will need to make two API calls (the second with a different offset value).
The reason these limits exist is that it allows the API server to operate in a performant way.
You can try specifying a higher limit, and in some cases it may work.
However, this is not guaranteed.

_Val_
Admin

Only if you are creating a separate cluster object on the same management server where the old cluster is in use. I would actually go a different way: re-use the old cluster object, reset and reinitialise SIC with the new appliances, pull the topology, et voilà.

I do understand this is not the answer you are looking for 🙂 From where I stand, automating this task only makes sense if you do such migrations in numbers.
The simple cluster object APIs in R80.40 (API 1.6 and up) should do the trick, though.
