Mark_Tremblay
Explorer

VSX Cluster Hardware Upgrade

Hi All,

We are currently running two 12600s in a VSX cluster on R80.30. We have purchased two new 7000s and would like to migrate the current VSX cluster to the new hardware. Is there a recommended migration path/doc for this scenario?
 
Thanks,
 
Mark
7 Replies
Bob_Zimmerman
Mentor

This block of shell code, run in expert mode, will tell you how many times each physical interface or bond is used:

{
# Interfaces in VS 0 which actually carry an IP address
vsenv 0 >/dev/null 2>&1
for iface in $(ifconfig | egrep '^[^ ]' | cut -d' ' -f1); do
    if ip addr show $iface | grep inet >/dev/null; then echo "$iface"; fi
done
# All interfaces in every other VS (tail skips the first /proc/vrf entry, VS 0)
for vsid in $(ls /proc/vrf/ | sort -n | tail -n +2); do
    vsenv $vsid >/dev/null
    ifconfig | egrep '^[^ ]' | cut -d' ' -f1
done
# Drop loopback/internal interfaces, strip VLAN tags, count uses per interface
} | egrep -v '^(lo|usb|wrpj?|br)[0-9]*$' | cut -d. -f1 | sort | uniq -c

Are you using any interface names which don't exist on the new box?

Mark_Tremblay
Explorer

We have a bond interface that hasn't been created on the new boxes.

Bob_Zimmerman
Mentor

That's not a problem, fortunately. You just need to build the bond before you run your 'vsx_util reconfigure'. What would be a problem is if you were using eth3-## interfaces directly. The 7000 only has two card slots. If you're using interfaces which cannot exist on the new box, you will have to use 'vsx_util change_interfaces' first to move to interfaces which can exist. The ideal VSX deployment has everything in bonds, as bonds are easy to move between physical ports on the members.
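
For reference, building the bond on a new member is only a few clish commands. A minimal sketch, assuming 802.3AD mode; the slave interfaces eth1-05/eth1-06 are just example names, and the bond's group ID has to match the bond name the existing VSX config references:

# Gaia clish on each new member; slave interfaces must have no IP assigned
add bonding group 1
add bonding group 1 interface eth1-05
add bonding group 1 interface eth1-06
set bonding group 1 mode 8023AD
save config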

If you're going to a different version (I don't know if the 7000 can run R80.30), you will have to use 'vsx_util upgrade' on the management to update the object definitions to the new version.
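
A minimal sketch of that step, run from expert mode on the management; the tool is interactive and walks you through the management IP, administrator credentials, cluster object, and target version:

# On the management server
vsx_util upgrade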

If all of your interfaces can exist on the new boxes, you need to make your bonds, run through the initial config (either via web UI or config_system), shut down the member you are replacing, use 'vsx_util reconfigure' on the management to have the new physical box take over the old object, then apply any dynamic routing configuration to the VSs. The process is then repeated for the other member.
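
Since the reconfigure only restores interfaces and static routes, it's worth capturing each VS's dynamic routing config from the old member before you shut it down. A rough sketch; VS ID 2 is just an example:

# On the old member, repeat for each VS:
clish
set virtual-system 2
show configuration
# copy out the dynamic routing lines (OSPF/BGP/etc.) so you can
# re-apply them on the new member after the reconfigure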

If all the boxes are running the same version, you can also add the new members to the cluster with 'vsx_util add_member' and remove the old members from the cluster with 'vsx_util remove_member'. This is commonly done if you have hardware information in your hostnames.
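
Roughly, the order of operations on the management looks like this (each command is interactive, and the new box still needs its first-time setup beforehand); a sketch of the flow as I understand it, not a full procedure:

vsx_util add_member       # define the new member object in the cluster
vsx_util reconfigure      # establish SIC and push config to the new box
# ...confirm the new member is healthy and traffic fails over, then:
vsx_util remove_member    # retire the old member's object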

Mark_Tremblay
Explorer

Ok, I see where you were going with the initial interface question. We are using the bond interface and all of the eth1-xx interfaces. We were not planning on upgrading, so we will have to check and make sure the 7000s can run R80.30. Which method do you see used more often - "vsx_util reconfigure" or "add_member" then "remove_member"?

Thanks again!

RamGuy239
Advisor

The Check Point 7000 appliance can't run Gaia R80.30. The CPU in the 7000-series requires the 3.10 kernel, and R80.30 only has the 2.6 kernel. There is a special release of R80.30 with the 3.10 kernel that can be used:

https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&solut...


But this is not a widely adopted or recommended release. These special 3.10-kernel releases of R80.20 and R80.30 exist because some newer appliances and open servers required the 3.10 kernel for their CPUs, while the mainline move to the 3.10 kernel for gateways was pushed out to R80.40. Check Point therefore had to produce interim R80.20 and R80.30 builds on the 3.10 kernel.

R80.40 is the first regular Gaia release that features the 3.10 kernel for gateways. I would recommend considering R80.40, R81, or R81.10 instead of going with the limited R80.30 3.10 release.
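
If you want to confirm what an existing box is running, two quick checks from expert mode:

uname -r                      # running kernel, e.g. 2.6.18 vs 3.10
clish -c "show version all"   # Gaia version, build, and edition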


Certifications: CCSA, CCSE, CCSM, CCSM ELITE, CCTA, CCTE, CCVS, CCME
Bob_Zimmerman
Mentor

Oof. Yeah, I would not recommend running a special release. That means this swap will involve an upgrade at the same time as a member replacement. Fortunately, that's not a big deal, as member replacements are the recommended way to upgrade VSX firewalls anyway! Here are the rough steps:

  1. Take a backup of your management server. For a SmartCenter, 'migrate export' works. For an MDS, mds_backup (there's a sketch of both after these steps).
  2. On the management, use 'vsx_util upgrade' to update the objects on the management server to the new version. This also builds the policy for the new version, which usually takes a few minutes per context. This doesn't push the policies; it just compiles them for the reconfigure. You won't be able to push after this until you upgrade at least one member.
  3. Set up the replacement member 1. I would go with R80.40, as any version which can manage R80.30 can also manage R80.40 (though it might require a jumbo on the management). Go through the first-time setup (either web UI or via config_system), update CPUSE, pick a jumbo and install it.
  4. Shut down the old member 1 and plug the cables into the new member 1.
  5. Use 'vsx_util reconfigure' on the management to establish SIC with the new member 1 and have it take over the object. This will push all the interface definitions and static routes down to the new member 1.
  6. Apply any dynamic routing configuration to the VSs.
  7. Here, you can do whatever kind of failover you want. MVC allows R80.30 and R80.40 to sync. You can also just tell people that ongoing traffic won't survive the upgrade, kill the old member 2, and the new member 1 will take over.
  8. Repeat steps 3, 4, 5, and 6 for member 2.

For an upgrade, it's recommended to combine steps 3 and 4. You shut down the old member 1, then reinstall the OS from scratch and treat it like a whole new member. It's basically the same process (minus the 'vsx_util upgrade') to replace a failed member, too.
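
A minimal sketch of the step-1 backup, assuming an R80.x SmartCenter; the output paths are just examples:

# SmartCenter: export the management database
cd $FWDIR/bin/upgrade_tools
./migrate export /var/log/pre_upgrade_backup.tgz

# MDS instead (-d sets the destination directory):
mds_backup -d /var/log/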

Mark_Tremblay
Explorer

Thanks guys for all the info. I've got to wrap my head around all of these steps and put together a plan.
