Hi,
One of our customers has an MDS environment with an MLM as well. Both servers are hosted on VMware-based VMs, and the whole deployment is running version R80.40 (+ JHF Take_192).
We are planning to upgrade the deployment to R81.20. On the MLM, they have the following disk layout:
[Expert@xyzlog01:0]# fdisk -l
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
Disk /dev/sda: 2199.0 GB, 2199023255552 bytes, 4294967296 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
Disk identifier: xxxxxxxxxxxx
# Start End Size Type Name
1 34 614433 300M EFI System
2 614434 4294967262 2T Linux LVM
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
Disk /dev/sdb: 2199.0 GB, 2199023255552 bytes, 4294967296 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
Disk identifier: xxxxxxxxxxxxxx
# Start End Size Type Name
1 34 134207044 64G EFI System
2 134207045 4294967262 2T Linux LVM
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
Disk /dev/sdc: 2199.0 GB, 2199023255552 bytes, 4294967296 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
Disk identifier: xxxxxxxxxx
# Start End Size Type Name
1 34 4294967262 2T EFI System
[Expert@xyzlog01:0]# lvm_manager
LVM overview
============
Size(GB) Used(GB) Configurable Description
lv_current 200 36 yes Check Point OS and products
lv_log 5500 5469 yes Logs volume
upgrade 220 N/A no Reserved for version upgrade
swap 64 N/A no Swap volume size
free 159 N/A no Unused space
------- ----
total 6143 N/A no Total size
press ENTER to continue.
We believe this clearly indicates that the customer followed sk94671 and added two vHDDs to their MLM VM to increase the disk space for their logs and log indices, presumably via a flow like the sketch below.
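For illustration, the sk94671-style expansion flow presumably went something like this (a rough sketch only; the exact lvm_manager menu wording may differ between versions):
# 1. Add a new virtual disk to the VM in vSphere, then reboot Gaia
# 2. In Expert mode, confirm the new disk is visible:
[Expert@xyzlog01:0]# fdisk -l
# 3. Run the interactive LVM tool and follow its menus to add the new
#    disk to the volume group and grow the Logs volume (lv_log):
[Expert@xyzlog01:0]# lvm_manager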
Back to the MDS/MLM upgrade from R80.40 to R81.20: in the MLM Gaia Portal, CPUSE does not offer the R81.20 upgrade package, and manually importing the package produces the following:
xyzlog01> installer import local Check_Point_R81.20_T631_Fresh_Install_and_Upgrade.tar
Preparing package for import. This operation might take a few moments
Note: The selected package will be copied into CPUSE repository
Info: Initiating import of Check_Point_R81.20_T631_Fresh_Install_and_Upgrade.tar...
Interactive mode is enabled. Press CTRL + C to exit (this will not stop the operation)
Result: The following results are not compatible with the package:
- Your partitions are not in Check Point standard format, and an upgrade is not possible. Reinstall your system and verify correct partitioning.
This installation package is not supported on Cloud environments (Microsoft Azure, Google Cloud, Amazon Web Service and Aliyun)
This installation package may not be supported on your appliance model.
For the latest software images for Check Point appliances, see sk166536
This installation package may not be supported on your server.
The installation is supported only on servers with standard Check Point partition formats.
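As an aside, once a package is in the CPUSE repository, the same compatibility checks can be re-run on their own from clish without re-importing; a sketch, assuming CPUSE's verify sub-command, where the package number is taken from the first command's output:
xyzlog01> show installer packages
xyzlog01> installer verify <package number>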
The partitioning error above points us to sk180769 Scenario 1, which clearly states:
"Gaia OS was installed on a Virtual Machine that has more than one virtual hard disk.
This configuration is not supported and causes incorrect partitioning."
Then in the "Solution" section, it mentions sk94671 among the solution steps.
As we understand from the above, we need to recreate the VM from scratch and handle (back up, restore, copy, move, etc.) 5+ TB of logs in the process, which will be a very tedious and disk-space-intensive task. From this perspective, the whole situation looks a bit odd: sk180769 states that the current customer setup is unsupported (do not use more than one vHDD), yet it advises following sk94671 (add additional vHDDs), which results in exactly that unsupported configuration.
What do you think? Have you had a similar experience?
Can you recommend a better approach to this situation (other than reinstalling the MLM, as above), please?
Hi,
I have encountered this. I'm not familiar with the technical reasons, but this is one of the pre-upgrade validations.
I suggest using an advanced upgrade on the MLM (sketched below). During the export you can use the -l flag to export the logs as well as the DB. There is no point in exporting the indexes too, since R81+ uses a newer version of Solr with a newer set of indexes anyway; skipping them will reduce the export time.
After exporting, you can suspend/stop the MLM VM and import into a new VM with one larger vHDD.
This way you do not need any additional actions beyond the upgrade itself and the creation of the new VM (+ OS installation and First Time Wizard to build another MLM).
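Roughly, the export/import pair would look like the sketch below (hostnames are placeholders; verify the exact paths and flags against the R81.20 Installation and Upgrade Guide before running):
# On the source R80.40 MLM, after installing the R81.20 migration tools package:
[Expert@xyzlog01:0]# cd $MDS_FWDIR/scripts
[Expert@xyzlog01:0]# ./migrate_server export -v R81.20 -l /var/log/mlm_export.tgz
# -l includes the log files; indexes are deliberately left out, since R81+ rebuilds them with the newer Solr
# On the freshly installed R81.20 MLM (single large vHDD), after the First Time Wizard:
[Expert@newlog01:0]# cd $MDS_FWDIR/scripts
[Expert@newlog01:0]# ./migrate_server import -v R81.20 -l /var/log/mlm_export.tgz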
Hope this helps.
Real showstopper! Has anyone else encountered the same?
Hi Amir,
Thank you for your response. Your suggestion makes sense for the MLM upgrade itself. However, how do you handle possible future disk space increase demands? Do you start following sk94671 again? I assume the pre-upgrade verification will remain in R81.20+ versions, thus blocking the upgrade on any MLM VM with multiple vHDDs, so we may have to redo the whole procedure (moving to a new VM with one vHDD) to be able to upgrade again.
Best regards, Norbert
I will try to find out.
After discussing this topic with the TAC as well, I carefully went through the partition layout of our customer's MLM VM, and it seems we missed something earlier. Looking at the blkid output below, I noticed that the swap partition is on the second disk (/dev/sdb1), but based on the information in sk94671, it should be on /dev/sda. This might be the real reason why R81.20 refuses to upgrade.
# blkid
/dev/sda1: LABEL="/boot" UUID="xxxxxx-9f46-4100-b85c-xxxxxxxxxxx" TYPE="ext3" PARTUUID="xxxxxx-7fff-4806-9374-xxxxxx"
/dev/sda2: UUID="ZRzcsM-xxxx-49hJ-xxx-xxx-zByG-xxxxxx" TYPE="LVM2_member" PARTUUID="xxxx-5475-47d0-80f2-xxxxxx"
/dev/sdb1: LABEL="SWAP-sdb1" UUID="xxxxx-3b79-4311-aafc-xxxxxxx" TYPE="swap" PARTUUID="xxxxxx-f5b9-449f-92b2-xxxxxx"
/dev/sdb2: UUID="xxxx-16u2-niTC-Bj41-o4ra-JOar-xxxxx" TYPE="LVM2_member" PARTUUID="xxxxx-460b-434a-a17f-xxxxx"
/dev/sdc1: UUID="xxxxx-IZ0I-Lonr-bEF9-2wtZ-cFCH-xxxxx" TYPE="LVM2_member" PARTUUID="xxxxx-4c5a-4686-97a5-xxxxxxx"
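For reference, two standard Linux commands available in Gaia Expert mode make it easy to confirm where swap and the LVM physical volumes actually live:
# Show the active swap device - here it resolves to /dev/sdb1 instead of /dev/sda:
[Expert@xyzlog01:0]# swapon -s
# Show which disks back the LVM volume group:
[Expert@xyzlog01:0]# pvs -o pv_name,vg_name,pv_size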
In conclusion, our customer cannot avoid re-creating the MLM VM and its disk layout from scratch, because the current layout is indeed not aligned with the official requirements.
Now I can also see why upgrades to future versions should work once they re-create the disk layout.
Best Regards, Norbert
Looks like this will also need to be considered for future upgrades.
I suggest using a single bigger vHDD for the VM, and maybe consider changing some of the log retention policy attributes if possible; a rough sketch of the single-vHDD growth path follows.
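For illustration, growing a single-vHDD Gaia VM would look roughly like this (a sketch only, not a verified Gaia procedure; involve TAC before touching a production MLM, and note that device names here are assumptions):
# 1. Grow the existing virtual disk in vSphere, then rescan it inside Gaia:
[Expert@mlm:0]# echo 1 > /sys/block/sda/device/rescan
# 2. Extend the LVM partition and physical volume over the new space
#    (assumes parted is available and the LVM partition is /dev/sda2):
[Expert@mlm:0]# parted /dev/sda resizepart 2 100%
[Expert@mlm:0]# pvresize /dev/sda2
# 3. Hand the free space to the Logs volume via the supported tool:
[Expert@mlm:0]# lvm_manager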
Quick question though, Amir: imagine you have an MDS with lots of CMAs (Domains) and an MLM with just 5 CLM/DLS servers.
1. Your MDS runs R81.10
2. Your MLM runs R81.10
Question
Can you bypass the "unhappy" verifier via CPUSE and upgrade the MLM to R81.20, knowing that your MDS will be on .20 as well a few days later and you won't need logs until the MDS is on .20? Meaning, can you technically keep SIC established while your MDS is still on .10 and your MLM is already on .20, or would you rather wait until the MDS runs .20 and then upgrade the MLM to "comply"?
Just curious what your stance would be on that subject.
Thanks in advance.
Cheers!
An MDS (or a single-domain management server) normally can't manage servers running a higher version.
I haven't tested it myself, but I would say that even if you manage to bypass the verifier and upgrade the server, it would fail to sync properly after the import and would revert to the source version.
I suggest upgrading the MLM only after the MDS upgrade.
Cheers Amir, that's exactly what I wanted to know, and I'll be happy to pass it on to the customer's team. TOP MAN! Thank you!
Jerry