Hi Everyone,
Since our customers are starting to migrate from R77 to R80.10, I wonder how we can increase the disk space of a VM-hosted deployment.
For those who would answer "lvm_manager" without checking: please note that this is stated as officially unsupported in both sk94671 and sk95566, at the top and at the bottom of those articles respectively.
Any clue?
If lvm_manager is unsupported, should we use lvextend & resize2fs instead? I'm pretty sure that would be even less supported.
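For illustration, here is roughly what I mean; a sketch only, and the vg_splat/lv_log names are my assumption, to be verified with vgs and lvs on the actual system:

```
# Grow the log volume by 100GB, then grow its filesystem to match.
# Volume group and LV names are assumed (verify with 'vgs' and 'lvs').
lvextend -L +100G /dev/vg_splat/lv_log
resize2fs /dev/vg_splat/lv_log
```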
Best regards.
We have an R80.10 MLM with 6TB of disk. When creating the VM, create 3 x 2TB disks, for example sda, sdb and sdc. Select sda to install Gaia, and at the next step, when setting partition sizes, you will be able to utilise the combined disk space of 6TB.
Hi Kaspars,
Thanks for your answer. Indeed, I do know this.
My concern is for an already installed system, on which the disk space is almost full.
Until it was stated as unsupported, we would add an extra VMDK disk to the VM, then use pvcreate, vgextend and lvm_manager to assign the new disk space to the log or root partition.
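In outline, that sequence looks like this (a sketch; the device name and volume group name are assumptions and should be checked with fdisk -l and vgs first):

```
# Make the new VMDK (assumed to appear as /dev/sdb) available to LVM,
# add it to the existing Gaia volume group, then let lvm_manager
# distribute the new space to the log or root logical volume.
pvcreate /dev/sdb
vgextend vg_splat /dev/sdb
lvm_manager
```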
We might also want to reallocate some space. For instance, my customer has an MDS with a nearly 400GB root partition, where 100GB would be more than enough for the lifetime of the MDS.
If I want to deallocate 300GB from root and allocate more disk space to the log volume, how would we do that without lvm_manager being supported?
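What I have in mind is something like the following, from Maintenance Mode only, since root cannot be shrunk while mounted. It is a sketch: the volume names are my assumptions, and shrinking a filesystem is destructive if the sizes are wrong, hence the question about support:

```
# Shrink the root filesystem first, then the LV, then hand the freed
# space to the log volume. LV names (lv_current, lv_log) are assumed.
e2fsck -f /dev/vg_splat/lv_current           # filesystem must be clean
resize2fs /dev/vg_splat/lv_current 100G      # shrink the filesystem first...
lvreduce -L 100G /dev/vg_splat/lv_current    # ...then the logical volume
lvextend -l +100%FREE /dev/vg_splat/lv_log   # give the freed space to logs
resize2fs /dev/vg_splat/lv_log               # grow the log filesystem to match
```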
Regards.
Sorry, my bad - I misunderstood the question. I guess one way would be to create a new VM (if that's an option) with the required disk partition sizes, restore a backup from the old VM onto it, and then move the logs over (technically, you could allocate a temp IP to the OLD one to be able to move logs directly from OLD to NEW). It really depends on your access to ESX.
Check Point R80.10 Known Limitations suggests you may be able to boot into Maintenance Mode and do it.
That said, I would take backups first.
Thanks!
Indeed, the limitation appears clearly in this SK.
So, since it may be used in Maintenance Mode, and given this is a VM environment, I believe we could create a snapshot at the VMware level before performing the change in Maintenance Mode.
Could we then say "lvm_manager is no longer supported in normal mode" and "lvm_manager is supported in maintenance mode"?
I am also concerned about a problem that might appear a few days after performing the operation (after an initial success): if we then open a service request, might support tell us we performed an unsupported operation and leave us to solve the problem by rolling back the VMware snapshot?
Since the limitations SK says the issue is with killing some PID, I suspect it has more to do with the lvm_manager script than with the actual process of extending the LVM. If the procedure goes through, I have a hard time seeing why anyone would object to you extending the disk. I have used lvm_manager multiple times on R80.x before noticing last week that the SK had been changed to remove support for R80.x (as far as I can tell, these notes were added in late November...). In my case, I have always stopped all processes (cpstop) before running the lvm_manager script, just so I know there is no major disk access, and so far I haven't had any issues with it.
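A sketch of that sequence, using nothing beyond the standard Check Point commands named above:

```
# Stop all Check Point processes so nothing is writing heavily to disk,
# resize interactively, then bring services back up.
cpstop
lvm_manager
cpstart
```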
Cheers
Hi, I also extended the disk on an R80.10 vSEC Management Server running in Azure by using lvm_manager. At rollout time I had added a disk but couldn't use it until lvm_manager was run.
To be safe, I first stopped the VM and ran an Azure Backup before doing this.
I wonder who had bad experiences, what they were, and why Check Point clearly states it's not supported while it does seem to work.
Could we then say "lvm_manager is no longer supported in normal mode" and "lvm_manager is supported in maintenance mode"?
This is how I interpret the two SKs anyway.
I've asked a couple folks from R&D to weigh in on this.
One note about a VMware snapshot: I recommend doing it with the VM powered off.
I've heard inconsistent results from those doing it while the VM was powered on.
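For those who prefer to script it, one way would be with the govc CLI; the VM name here is a placeholder, and the same steps can of course be done in the vSphere UI:

```
# Power off, snapshot while powered off, then boot back up
# (into Maintenance Mode if you are about to resize volumes).
govc vm.power -off myMgmtVM
govc snapshot.create -vm myMgmtVM pre-lvm-resize
govc vm.power -on myMgmtVM
```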
Hey Dameon, did you ever get more feedback from R&D on this? I have a customer with the same question: if they start with a 5150 with 24TB on R80.10 and down the road expand to 48TB, what's the process to do that?
SK95566 now clearly says:
- Perform a backup/snapshot before proceeding!!!
- Reboot and perform the operation in Maintenance Mode only!!!
This applies to R80.x as well.
This works very well for me with lvm_manager (in maintenance mode, of course!).
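After the resize, a quick sanity check with standard commands confirms the result:

```
# Confirm the logical volumes changed as intended and the
# filesystems grew to match.
lvs
df -h
```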