lvm_manager successor on R80.10

Bruno_LABOUR
Participant

Hi Everyone,

Since our customers have started migrating from R77 to R80.10, I wonder how we could increase a VM-hosted system's disk space.

For those who would answer "lvm_manager" without checking: please note that this is stated as officially unsupported in both sk94671 and sk95566 (at the top and at the bottom of those articles, respectively).

Any clue?

If lvm_manager is unsupported, should we use lvextend & resize2fs directly? I'm pretty sure this would be even less supported.
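
For clarity, the manual path I mean would look roughly like this. This is a sketch only, assuming the new disk appears as /dev/sdb and the Gaia defaults of volume group vg_splat and log volume lv_log (verify the real names with pvs/vgs/lvs first):

    pvcreate /dev/sdb                          # initialise the new disk as an LVM physical volume
    vgextend vg_splat /dev/sdb                 # add it to the existing volume group
    lvextend -L +100G /dev/vg_splat/lv_log     # grow the log logical volume by 100GB
    resize2fs /dev/vg_splat/lv_log             # grow the filesystem to fill the resized volume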

Best regards.

11 Replies
Kaspars_Zibarts
Employee

We have an R80.10 MLM with 6TB of disk. When creating the VM, create 3 x 2TB disks, for example sda, sdb and sdc. Select sda to install Gaia on, and at the next step, when setting partition sizes, you will be able to utilise the combined disk space of 6TB.
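
If you want to confirm the combined space after installation, the standard LVM listing commands should show it; a quick check, assuming Gaia's default volume group name vg_splat:

    pvs    # all three disks should appear as physical volumes
    vgs    # vg_splat's total size should be close to 6TB
    lvs    # per-volume breakdown (root, log, etc.)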

Bruno_LABOUR
Participant

Hi Kaspars,

Thanks for your answer. Indeed, I do know this.

My concern is for an already installed system on which the disk space is almost full.

Until it was stated as unsupported, we would add an extra VMDK disk to the VM, then use pvcreate, vgextend and lvm_manager to assign this new disk space to the log or root partition.

We might also want to reallocate some space. For instance, my customer has an MDS with a nearly 400GB root partition, where 100GB would be big enough for the MDS's whole lifetime.

If I want to deallocate 300GB from root and allocate more disk space to the log volume, how would we do that without lvm_manager being supported?
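
For reference, my understanding is that doing this by hand would mean something like the following. This is a sketch only, assuming the Gaia defaults of vg_splat with lv_current (root) and lv_log, and noting that an ext filesystem can only be shrunk while unmounted, so the root shrink would have to run from rescue/maintenance boot media, after a full backup or VM snapshot:

    e2fsck -f /dev/vg_splat/lv_current           # mandatory integrity check before shrinking
    resize2fs /dev/vg_splat/lv_current 100G      # shrink the root filesystem first
    lvreduce -L 100G /dev/vg_splat/lv_current    # then shrink the logical volume to match
    lvextend -l +100%FREE /dev/vg_splat/lv_log   # hand the freed space to the log volume
    resize2fs /dev/vg_splat/lv_log               # grow the log filesystem into it

The ordering is the dangerous part: shrink the filesystem before the volume, and grow the volume before the filesystem. Getting that backwards destroys data, which is exactly the kind of mistake lvm_manager exists to prevent.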

Regards.

Kaspars_Zibarts
Employee

Sorry, my bad - I misunderstood the question. I guess one way would be to create a new VM (if that's an option) with the required disk partition sizes, restore a backup from the old VM onto it, and then move the logs (technically, you could allocate a temporary IP to the OLD one to be able to move logs directly from OLD to NEW). It really depends on your access to ESX.
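
For the log move itself, a minimal sketch, assuming management logs live under $FWDIR/log and using a hypothetical temporary address NEW_TEMP_IP for the new VM:

    # on the OLD server, once it has its temporary IP
    # (create /tmp/old-logs on the NEW server first)
    scp $FWDIR/log/* admin@NEW_TEMP_IP:/tmp/old-logs/
    # then, on the NEW server, move the files into its own $FWDIR/log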

PhoneBoy
Admin

Check Point R80.10 Known Limitations suggests you may be able to boot into Maintenance Mode and do it.

That said, I would take backups first.

Bruno_LABOUR
Participant

Thanks!

Indeed, the limitation appears clearly in this SK.

So, since it may be used in Maintenance Mode, and for a VM environment, I believe we could create a snapshot at the VMware level before performing the change in Maintenance Mode.

Could we then say "lvm_manager is no longer supported in normal mode" & "lvm_manager is supported in maintenance mode" ?

I am also concerned about a problem that may appear a few days after performing the operation (after an initial success). If we open a service request, might support tell us we performed an unsupported operation, and therefore leave us to solve the problem by rolling back the VMware snapshot?

MikaelJohnsson
Contributor

Since the limitations SK says the issue is with killing some PID, I suspect it has more to do with the lvm_manager script itself than with the actual process of extending the LVM. If the operation goes through, I have a hard time seeing why anyone would object to you extending the disk. I had used lvm_manager multiple times on R80.x before noticing last week that the SK had been changed to remove support for R80.x (as far as I can tell, those notes were added in late November...). In my cases I've always stopped all processes (cpstop) prior to running the lvm_manager script, just so I know there is no major disk access, and I haven't (so far) had any issues with it.
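
In practice that sequence is short; lvm_manager is an interactive menu script, so there are no arguments to pass:

    cpstop         # stop Check Point services so there is no major disk access
    lvm_manager    # interactive menu: pick the volume and enter the new size
    reboot         # come back up with all services on the resized volume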

Cheers

Jeroen_Demets
Collaborator

Hi, I also extended the disk on an R80.10 vSEC Management Server running in Azure by using lvm_manager. At rollout time I had added a disk, but couldn't use it until lvm_manager was run.

I first stopped the VM and ran an Azure Backup before doing this to be safe.

I wonder who had bad experiences, what they were, and why Check Point clearly states it's not supported while it does seem to work.

PhoneBoy
Admin

Could we then say "lvm_manager is no longer supported in normal mode" & "lvm_manager is supported in maintenance mode" ?

This is how I interpret the two SKs anyway. 

I've asked a couple folks from R&D to weigh in on this.

One note about a VMware snapshot: I recommend doing it with the VM powered off.

I've heard inconsistent results from those doing it while the VM was powered on. 

Mike_Walsh
Employee Alumnus

Hey Dameon, did you ever get more feedback from R&D on this? I have a customer with the same question: if they start with a 5150 with 24TB on R80.10 and down the road expand to 48TB, what's the process to do that?

PhoneBoy
Admin

SK95566 now clearly says:

  • Perform a backup/snapshot before proceeding!!!
  • Reboot and perform the operation in Maintenance Mode only!!!

Which applies to R80.x as well.
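
After booting back into normal mode it's worth verifying the result before trusting it; standard LVM and df output is enough, assuming the log volume is mounted at /var/log as on a default Gaia install:

    lvs              # confirm the new logical volume sizes
    df -h /var/log   # confirm the filesystem actually grew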

G_W_Albrecht
Legend

This works very well for me with lvm_manager (in maintenance mode, of course!).

CCSE CCTE CCSM SMB Specialist
