H1pp0o
Contributor

PowerEdge upgrade to R81.20

Dear CheckPoint Community,

 

I have 2x Dell PowerEdge servers running MDS in HA, version R81.10. During the image import of R81.20 via CPUSE, I received the following error:

 

“Your partitions are not in Check Point standard format, and an upgrade is not possible. Reinstall your system and verify correct partitioning.”

 

After a quick search I found an SK: https://support.checkpoint.com/results/sk/sk180769

The platform mentioned is only Cloud or ESX, but I do face it on my servers. Each server has 4x SSD drives:

[attached screenshot of the drive configuration]

 

How should the upgrade be performed now? A clean install via bootable USB? If so, how should I proceed with those 4x drives? The SK mentions in the solution section to install on one drive and then add the rest of the storage (but in my opinion that solution refers only to the Cloud or ESX platforms).

 

 

Please advise.
Thank you.

10 Replies
Duane_Toler
Advisor

Run "fdisk -l" on one of the hosts and you'll see the issue.  Here's why:

Like most other Check Point customers, you likely had these built originally with an earlier R80 or even R77.x version and upgraded them over the years.  Those earlier versions used a different partition layout than what R81+ now wants.  Check Point developers have done "something" (I can't figure out what) to the GRUB boot loader parameters such that it now requires a specific disk layout of only 3 partitions (or 4 if you have a BIOS utilities partition on some servers; that partition seems to be ignored by the partition verifier).  I had a TAC case open for it, and that was the reply I got back.
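
If it helps to see it concretely, a few commands from Expert mode will show the layout the verifier is complaining about (purely illustrative; /dev/sda is an assumption, since your RAID virtual disk may enumerate differently):

fdisk -l /dev/sda    # partition table on the RAID virtual disk
pvs; vgs; lvs        # the LVM layout Gaia built on top of it
df -khT              # filesystem types currently in use (ext3 vs. xfs)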

I also tried to be "clever" and disabled this partition check, then did a Blink upgrade on a host like yours.  The install went fine, but on the first reboot GRUB failed to load and said it couldn't find the boot partition.  I tried in vain to fix it, but eventually I had to give up (not something I do lightly!); granted, it would not have been a sustainable, repeatable fix even if I had solved it.  I did a fresh install and it worked just fine.  The partition layout DID change, too, which was somewhat surprising.

Unfortunately, you will have to do a net-new install on this server from ISO (bootable USB, or spinning plastic discs).  However, looking back, the fresh-install route ultimately turned out to be a good choice.

Someone else can explain what happened with the disk layout; I certainly don't know.  Regardless, I understand why they need to have this in a deterministic state so things like the Blink installer and images can be universal and predictable.  I'm highly impressed with the Blink mechanism, so if this is the price to pay, then I'm ok with paying it.

 

H1pp0o
Contributor

Thank you so much for your answer.

 

I saw a post on checkmates regarding this boot loader.

That was my thought too, that I have to do a clean install - but how do I do it if I have 4x HDD (RAID)? Does the installation handle it by itself? And yes, this server was originally R77.x and was upgraded to newer versions over the years 🙂

 

I'll also open a TAC case to see what they come up with.

 

thanks !!

Duane_Toler
Advisor

If all your physical disks are in a logical volume on the RAID controller (likely RAID 5, with 4 disks), then it will appear as a single logical disk to the Gaia installer.  From there, the Gaia installer will destroy and recreate its new partitions for LVM and format them for XFS.  XFS is another fantastic reason to wipe and re-install an older disk volume.
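
To double-check that the RAID controller really presents the four SSDs as a single virtual disk before booting the installer, something like this from Expert mode should show one disk device (a sketch only; device naming varies, and lsblk may not be present on every build):

lsblk -o NAME,SIZE,TYPE    # expect a single disk with partitions nested under it
cat /proc/partitions       # fallback if lsblk isn't available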

Be sure you have the usual complement of Gaia CLISH configuration backups, a tarball archive of your /home directories, and any scripts in /usr/local/bin or wherever.  Obviously you'll have your MDS backup; be sure you do it with the $FWDIR/scripts/migrate_server method and target the export for the future version you're installing (R81.20 in this case). Do not use the old "migrate export", either.
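
As a rough sketch of that pre-install checklist from Expert mode (not an official procedure; the filenames are only examples, and the migrate_server options should be verified against the R81.20 upgrade guide):

clish -c "show configuration" > /var/log/gaia_config_backup.txt   # Gaia CLISH config
tar czf /var/log/home_and_scripts.tgz /home /usr/local/bin        # home dirs and local scripts
cd $FWDIR/scripts
./migrate_server export -v R81.20 /var/log/mds_export.tgz         # MDS export targeting R81.20

Copy the resulting files off the box before wiping the disks.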

 

the_rock
Legend

I think that's a good idea, for sure; better to get an official TAC answer. I do see all the points Duane made, super valid.

Andy

H1pp0o
Contributor

TAC wasn't helpful - they just said to perform a clean install, as R81.20 does not support ext3, only XFS.  And of course to use the advanced procedure with the migrate_server script.

 

Thank you all for the quick help!!

the_rock
Legend

Community to the rescue 🙂

Well, in this case, @Duane_Toler 👍

Andy

the_rock
Legend

Btw, funny enough, I spoke with a customer the other day from a health care place and he told me they have PowerEdge devices, probably 10+ years old, and they want to upgrade them to R81.20. I'm thinking, no offence, that would be like having a Fiat 500 and wanting to drive 300 km/h, NOT gonna happen lol

Anyway, I told him it's best if they purchase new CP appliances, which is what they wanted to do anyway, so they will probably go with 6600s; those, I hear, are super solid.

Andy

Duane_Toler
Advisor

Oh good grief, indeed!  Depending on just how old the hardware is, they might not even have enough CPUs for R81.20!  I had such a Dell server that had to be replaced for just that reason!  It only had 2 CPU cores. 🙂

 

Duane_Toler
Advisor

Errr... that's not entirely true, but ok:

[Expert@cpmgmt:0]# df -khT
Filesystem                      Type   Size  Used Avail Use% Mounted on
/dev/mapper/vg_splat-lv_current xfs     70G   22G   49G  31% /
/dev/sda2                       ext3   291M   45M  232M  17% /boot
/dev/mapper/vg_splat-lv_log     xfs    340G  105G  236G  31% /var/log
tmpfs                           tmpfs  7.8G   26M  7.7G   1% /dev/shm
cgroup                          tmpfs  7.8G     0  7.8G   0% /sys/fs/cgroup

 Regardless, XFS is the way to go for management.  Definitely worth the price of admission.

The boot partition will always be ext3, tho.

H1pp0o
Contributor

I don't want to paste the TAC answer here, but that's what they told me 🙂

I checked the server spec - it should be fine with the installed memory and CPU. If not, migrating will be the next step 🙂
