Hi,
One of our customers runs an MDS environment with an MLM as well. Both servers are hosted on VMware-based VMs, and the whole deployment is running R80.40 (+ JHF Take_192).
We are planning to upgrade the deployment to R81.20. On the MLM, they have the following disk layout:
[Expert@xyzlog01:0]# fdisk -l
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
Disk /dev/sda: 2199.0 GB, 2199023255552 bytes, 4294967296 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
Disk identifier: xxxxxxxxxxxx
#  Start   End         Size  Type        Name
1  34      614433      300M  EFI System
2  614434  4294967262  2T    Linux LVM
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
Disk /dev/sdb: 2199.0 GB, 2199023255552 bytes, 4294967296 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
Disk identifier: xxxxxxxxxxxxxx
#  Start      End         Size  Type        Name
1  34         134207044   64G   EFI System
2  134207045  4294967262  2T    Linux LVM
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
Disk /dev/sdc: 2199.0 GB, 2199023255552 bytes, 4294967296 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
Disk identifier: xxxxxxxxxx
#  Start  End         Size  Type        Name
1  34     4294967262  2T    EFI System
[Expert@xyzlog01:0]# lvm_manager
LVM overview
============
            Size(GB)  Used(GB)  Configurable  Description
lv_current  200       36        yes           Check Point OS and products
lv_log      5500      5469      yes           Logs volume
upgrade     220       N/A       no            Reserved for version upgrade
swap        64        N/A       no            Swap volume size
free        159       N/A       no            Unused space
            -------   ----
total       6143      N/A       no            Total size
press ENTER to continue.
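As a quick sanity check, the lvm_manager totals line up with three 2 TB disks, which is consistent with two vHDDs having been added to the original one (the per-volume sizes below are copied from the lvm_manager output above; treating each 2199.0 GB disk as exactly 2 TiB = 2048 GB is an approximation):

```shell
# Sum the lvm_manager volumes (GB) and compare against three 2 TiB vHDDs.
total=$((200 + 5500 + 220 + 64 + 159))   # lv_current + lv_log + upgrade + swap + free
raw=$((3 * 2048))                        # three 2199.0 GB (~2 TiB) disks
echo "allocated: ${total} GB"            # 6143 GB, matching lvm_manager's total
echo "raw:       ${raw} GB"              # 6144 GB
```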
We believe this clearly indicates the customer followed sk94671 and added two vHDDs to their MLM VM to increase disk space for logs and log indices.
Back to the MDS/MLM upgrade from R80.40 to R81.20: in the MLM's Gaia Portal, CPUSE does not offer the R81.20 upgrade package, and manually importing the package results in the following:
xyzlog01> installer import local Check_Point_R81.20_T631_Fresh_Install_and_Upgrade.tar
Preparing package for import. This operation might take a few moments
Note: The selected package will be copied into CPUSE repository
Info: Initiating import of Check_Point_R81.20_T631_Fresh_Install_and_Upgrade.tar...
Interactive mode is enabled. Press CTRL + C to exit (this will not stop the operation)
Result: The following results are not compatible with the package:
- Your partitions are not in Check Point standard format, and an upgrade is not possible. Reinstall your system and verify correct partitioning.
This installation package is not supported on Cloud environments (Microsoft Azure, Google Cloud, Amazon Web Service and Aliyun)
This installation package may not be supported on your appliance model.
For the latest software images for Check Point appliances, see sk166536
This installation package may not be supported on your server.
The installation is supported only on servers with standard Check Point partition formats.
This points us to sk180769 Scenario 1, which clearly states:
"Gaia OS was installed on a Virtual Machine that has more than one virtual hard disk.
This configuration is not supported and causes incorrect partitioning."
Then in the "Solution" section, it mentions sk94671 among the solution steps.
As we understand it, we would need to recreate the VM from scratch and handle (back up, restore, copy, move, etc.) 5+ TB of logs in the process, which would be a very tedious and disk-space-intensive task. From this perspective, the situation looks contradictory: sk180769 states the current customer setup is unsupported (do not use more than one vHDD), yet it advises following sk94671 (add additional vHDDs), which produces exactly that unsupported configuration.
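If rebuilding the MLM turns out to be unavoidable, one way to soften the log-handling part is to sync the closed log files to the new VM while the old MLM keeps running, then do a short final delta pass at cutover. A minimal sketch (the fw.log* naming convention for the active log and the per-CLM $FWDIR/log layout are assumptions based on standard Gaia log servers; verify the exact directories on your MLM before use):

```shell
# migrate_logs SRC DEST
# Copies closed log files from SRC to DEST with rsync, preserving
# timestamps and permissions, and skipping the active log (fw.log*).
# Re-running it only transfers the delta, so the final cutover pass is fast.
migrate_logs() {
    local src="$1" dest="$2"
    rsync -a --exclude 'fw.log*' "${src}/" "${dest}/"
}
```

On an MLM each CLM keeps its logs under its own FWDIR, so this would be run once per CLM, e.g. `migrate_logs "$FWDIR/log" newmlm:/staging/clm1` (hypothetical target path).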
What do you think? Have you had a similar experience?
Can you recommend a better approach to this situation than reinstalling the MLM?