Norbert_Giczi
Participant

Upgrading MLM VM with more than one vHDD from R80.40 to R81.20

Hi,

 

One of our customers has an MDS environment. They have an MLM as well. Both servers run as VMware-based virtual machines. The whole deployment is running version R80.40 (+ JHF Take_192).

We are planning to upgrade the deployment to R81.20. On the MLM, they have the following disk layout:

[Expert@xyzlog01:0]# fdisk -l
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sda: 2199.0 GB, 2199023255552 bytes, 4294967296 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
Disk identifier: xxxxxxxxxxxx


 #         Start          End    Size  Type        Name
 1            34       614433    300M  EFI System
 2        614434   4294967262      2T  Linux LVM
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sdb: 2199.0 GB, 2199023255552 bytes, 4294967296 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
Disk identifier: xxxxxxxxxxxxxx

 #         Start          End    Size  Type        Name
 1            34    134207044     64G  EFI System
 2     134207045   4294967262      2T  Linux LVM
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sdc: 2199.0 GB, 2199023255552 bytes, 4294967296 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
Disk identifier: xxxxxxxxxx


 #         Start          End    Size  Type        Name
 1            34   4294967262      2T  EFI System



[Expert@xyzlog01:0]# lvm_manager
LVM overview
============
              Size(GB)   Used(GB)   Configurable   Description
lv_current    200        36         yes            Check Point OS and products
lv_log        5500       5469       yes            Logs volume
upgrade       220        N/A        no             Reserved for version upgrade
swap          64         N/A        no             Swap volume size
free          159        N/A        no             Unused space
              -------    ----
total         6143       N/A        no             Total size

press ENTER to continue.

 

We believe this clearly indicates that the customer followed sk94671 and added two vHDDs to their MLM VM to increase the disk space available for logs and log indices.
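
If needed, the multi-vHDD expansion can be confirmed directly in the LVM layout from expert mode. A minimal sketch (the volume group name vg_splat is the usual Gaia default and is an assumption here):

[Expert@xyzlog01:0]# pvs   # all three vHDDs should appear as physical volumes of the same volume group
[Expert@xyzlog01:0]# vgs   # size/free space of that volume group (typically vg_splat on Gaia)
[Expert@xyzlog01:0]# lvs   # lv_current and lv_log, matching the lvm_manager overview above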

Back to the MDS/MLM upgrade from R80.40 to R81.20: in the MLM Gaia Portal, CPUSE does not offer the R81.20 upgrade package. Manually importing the package results in the following:

xyzlog01> installer import local Check_Point_R81.20_T631_Fresh_Install_and_Upgrade.tar
Preparing package for import. This operation might take a few moments
Note: The selected package will be copied into CPUSE repository
Info: Initiating import of Check_Point_R81.20_T631_Fresh_Install_and_Upgrade.tar...
Interactive mode is enabled. Press CTRL + C to exit (this will not stop the operation)
Result: The following results are not compatible with the package:
- Your partitions are not in Check Point standard format, and an upgrade is not possible. Reinstall your system and verify correct partitioning.
This installation package is not supported on Cloud environments (Microsoft Azure, Google Cloud, Amazon Web Service and Aliyun)
This installation package may not be supported on your appliance model.
For the latest software images for Check Point appliances, see sk166536
This installation package may not be supported on your server.
The installation is supported only on servers with standard Check Point partition formats.

This points us to sk180769 Scenario 1, which clearly states:

"Gaia OS was installed on a Virtual Machine that has more then one virtual hard disk.

This configuration is not supported and causes incorrect partitioning."

 

Then in the "Solution" section, it mentions sk94671 among the solution steps.

As we understand from the above, we need to recreate the VM from scratch and handle (back up, restore, copy, move, etc.) 5+ TB of logs in the process, which will be a very tedious and disk-space-intensive task. From this perspective, the whole situation looks a bit odd - sk180769 states that the current customer setup is unsupported (do not use more than one vHDD), yet it advises following sk94671 (add additional vHDDs), which results in exactly that unsupported setup.
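
Since a rebuilt VM's single vHDD would have to absorb everything currently spread across the three disks, it is worth confirming the real usage before sizing it. A minimal sketch (on Gaia, lv_log is mounted at /var/log):

[Expert@xyzlog01:0]# df -h             # usage per mounted logical volume
[Expert@xyzlog01:0]# df -h /var/log    # lv_log specifically - the ~5.5 TB volume shown by lvm_manager above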

 

What do you think? Do you have similar experience?

Can you recommend a better approach to this situation (other than the above, i.e. reinstalling the MLM), please?

zsszlama
Contributor

A real showstopper! Has anyone else encountered the same?

Amir_Senn
Employee

Hi,

I encountered this. I'm not familiar with the technical reasons, but this is one of the pre-upgrade validations.

I suggest using an advanced upgrade on the MLM. During the export you can use the -l flag to export the logs as well as the database. There is no point in exporting the indexes, since R81+ uses a newer version of SOLR with a newer set of indexes anyway; skipping them will reduce the export time.

After exporting, you can suspend/stop the MLM VM and import into a new VM that has one larger vHDD.

This way, nothing is needed beyond the upgrade itself and the creation of a new VM (plus the OS installation and First Time Wizard to build another MLM).
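
For reference, the export/import described above would roughly look as follows with the R81.20 Upgrade Tools (the installation path of the tools and the export file name are assumptions - see sk135172 and the R81.20 Installation and Upgrade Guide for the exact package and procedure):

On the existing R80.40 MLM, after installing the R81.20 Upgrade Tools:
[Expert@xyzlog01:0]# $MDS_FWDIR/scripts/migrate_server export -v R81.20 -l /var/log/mlm_export.tgz

On the freshly built R81.20 MLM (single, larger vHDD), after the First Time Wizard:
[Expert@newlog01:0]# $MDS_FWDIR/scripts/migrate_server import -v R81.20 -l /var/log/mlm_export.tgz

The -l flag takes the logs along without their indexes, which matches the suggestion above to skip the index export.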

Hope this helps.

Kind regards, Amir Senn
Norbert_Giczi
Participant

Hi Amir,

Thank you for your response. Your suggestion makes sense for the MLM upgrade itself. However, how do you handle possible further disk space increase demands? Do you start following sk94671 again? I assume the pre-upgrade verification will remain in R81.20+ versions, thus blocking the upgrade of an MLM VM that has multiple vHDDs, so we may have to redo the whole procedure (moving to a new VM with one vHDD) to be able to upgrade.

Best regards, Norbert

Amir_Senn
Employee

I will try to find out.

Kind regards, Amir Senn
Norbert_Giczi
Participant

After discussing this topic with the TAC as well, I carefully went through the partition layout of our customer's MLM VM. It seems we missed something earlier. Looking at the blkid output below, I noticed that the swap partition is on the second disk (/dev/sdb1), but based on the information in sk94671 it should be on /dev/sda. This might be the real reason why R81.20 refuses to upgrade.

# blkid

/dev/sda1: LABEL="/boot" UUID="xxxxxx-9f46-4100-b85c-xxxxxxxxxxx" TYPE="ext3" PARTUUID="xxxxxx-7fff-4806-9374-xxxxxx"

/dev/sda2: UUID="ZRzcsM-xxxx-49hJ-xxx-xxx-zByG-xxxxxx" TYPE="LVM2_member" PARTUUID="xxxx-5475-47d0-80f2-xxxxxx"

/dev/sdb1: LABEL="SWAP-sdb1" UUID="xxxxx-3b79-4311-aafc-xxxxxxx" TYPE="swap" PARTUUID="xxxxxx-f5b9-449f-92b2-xxxxxx"

/dev/sdb2: UUID="xxxx-16u2-niTC-Bj41-o4ra-JOar-xxxxx" TYPE="LVM2_member" PARTUUID="xxxxx-460b-434a-a17f-xxxxx"

/dev/sdc1: UUID="xxxxx-IZ0I-Lonr-bEF9-2wtZ-cFCH-xxxxx" TYPE="LVM2_member" PARTUUID="xxxxx-4c5a-4686-97a5-xxxxxxx"
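
A minimal way to spot this kind of mismatch without digging through blkid output, using standard Linux commands that should be available from expert mode (on a standard Gaia installation, /boot, swap and the first LVM physical volume all live on /dev/sda):

[Expert@xyzlog01:0]# swapon -s                                   # device hosting the active swap (here /dev/sdb1)
[Expert@xyzlog01:0]# lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT   # per-disk view of partitions, swap and LVM members
[Expert@xyzlog01:0]# parted -l                                   # GPT-aware partition tables of all disks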

 

In conclusion, our customer cannot avoid re-creating the MLM VM and its disk layout from scratch, because the layout is indeed not aligned with the official requirements.

Now I can also see why it should work with future versions after they re-create the disk layout.

 

Best Regards, Norbert

Amir_Senn
Employee

Looks like this will also need to be considered for future upgrades.

I suggest using a bigger vHDD for the VM and perhaps adjusting some of the log retention policy attributes, if possible.

Kind regards, Amir Senn
Jerry
Mentor

A quick question though, Amir: imagine you have an MDS with lots of CMAs (Domains) and an MLM with just 5 CLM/DLS servers.

1. Your MDS runs R81.10

2. Your MLM runs R81.10

 

Question

 

Can you bypass the "unhappy" verifier via CPUSE and upgrade the MLM to R81.20, knowing that your MDS will be on .20 as well a few days later and you won't need the logs until the MDS is on .20? Meaning, can you technically have SIC established while your MDS is still on .10 and your MLM is already on .20, or would you rather wait until the MDS runs .20 and only then upgrade the MLM to .20, to "comply"?

 

Just curious what your stance would be on that subject.

Thanks in advance.

Cheers!

Jerry
Amir_Senn
Employee

An MDS or a single-domain management server normally can't manage servers running a higher version.

I haven't tested it myself, but I would say that even if you manage to bypass the verifier and upgrade the server, it would fail to sync properly after the import and would revert back to the source version.

I suggest upgrading the MLM only after the MDS upgrade.

Kind regards, Amir Senn
Jerry
Mentor

Cheers Amir, that's exactly what I wanted to know and I'll be happy to pass it on to the customer's team. Top man! Thank you!

Jerry

