HansKazan
Contributor

lvm_manager remote session timed out, no longer able to allocate space

Hello everyone.

Thank you for taking the time to read and possibly reply to my problem. I have been looking for a solution for a while and would like to know if anyone has found a better one.

I want to upgrade an Open Server Gateway running R80.40 to R81.10. To do so I need to increase the root volume. The usual process works fine (add disk space in VMware and internally, etc.). I entered lvm_manager and increased the root volume from 16 GB to 24 GB. While this process was running, I lost my remote session with the Gateway.

Upon re-entering the Gateway through my remote session and entering lvm_manager again, I am greeted with the following output:

"There is not enough available space in the system. Try to reduce other logical volumes' size and then try again. press ENTER to continue."

When viewing the LVM storage overview, I can see that 10 GB of free, unused space is still present. If I then try to increase the lv_log partition, this works without any issue. So I assume some kind of log or session state is being kept somewhere, persisting through reboot and power-down, that is interfering with the process.
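Outside lvm_manager, the free space can be confirmed directly from expert mode with `vgdisplay vg_splat`. Below is a minimal sketch of pulling the free-extent figure out of that output; the sample line is a mock-up of what a volume group with 10 GB unused would report, not a capture from the actual gateway:

```shell
# Mocked-up "Free PE / Size" line as vgdisplay would print it for a VG
# with 10 GB unallocated; on the real box you would pipe vgdisplay itself.
vg_report='  Free  PE / Size       2560 / 10.00 GB'

# Grab the last two fields (size and unit) from the Free PE line.
free_size=$(printf '%s\n' "$vg_report" | awk '/Free +PE/ {print $(NF-1), $NF}')
echo "Free space in vg_splat: $free_size"
```

If vgdisplay agrees that the space is unallocated while lvm_manager still refuses, the blockage is in the leftover state rather than in the volume group itself.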

Would anyone happen to know how to recover from such a situation, apart from reprovisioning the appliance?

Thanks a lot for your valuable input!

 


Accepted Solutions
HansKazan
Contributor

I followed the suggestion in this thread:
https://community.checkpoint.com/t5/Management/Disk-Extend-Problem/m-p/36747/highlight/true#M7664

lvremove -f /dev/vg_splat/lv_extended_current

This removed the unfinished "new" root and allowed us to run the lvm_manager resize again successfully. Hope it helps someone in the future. Be mindful of the possible impact this may have if not done properly.
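For anyone hitting the same state, the recovery can be sketched as a short sequence. This is only a sketch under the assumptions from this thread (the stale volume left behind by the interrupted resize is /dev/vg_splat/lv_extended_current); the `run` wrapper and `DRY_RUN` guard are my own additions so the commands can be previewed before anything destructive runs:

```shell
# Sketch of the recovery sequence. With DRY_RUN=1 the commands are only
# echoed; unset it on the gateway itself, ideally from maintenance mode.
DRY_RUN=1
run() { if [ "${DRY_RUN:-}" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run lvs vg_splat                                   # confirm the stale volume really exists
run lvremove -f /dev/vg_splat/lv_extended_current  # drop the unfinished root copy
run lvm_manager                                    # then retry the resize
```

Verify with lvs or dmsetup before removing anything: lvremove -f is irreversible, so make sure the name matches the leftover volume and not the live root.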

9 Replies
HansKazan
Contributor

Some additional information added as attachments. The first image is the session that got disconnected. After reconnecting, I noticed in lvm_manager that the root had not increased. I waited roughly 15 minutes before initiating a manual reboot; on these devices the extension process normally takes about 5 minutes. All the other images were taken after the reboot and a full power-down.

step1.jpg

step2.jpg

step3.jpg

step4.jpg

the_rock
Legend

This is just me, but personally I would never do this process unless I had console access to the appliance. The same thing happened to me before, and that was a valuable lesson learnt.

Andy

HansKazan
Contributor

Thank you, Andy. This will be a valuable lesson for me as well!

the_rock
Legend

Let us know how you solve it.

Andy

Chris_Atkinson
Employee

What is the HW platform used here, and were you performing this process from maintenance mode?

 

CCSM R77/R80/ELITE
HansKazan
Contributor

A VMware Open Server on top of an ESXi hypervisor. The exact specs are unknown to me, but the VM runs dual-core.

I don't have the SK on hand that I followed for the other 120 of these, but yes, all devices were booted in maintenance mode to perform these actions. The problem here was that regular SSH and console access had not been arranged at the time, although it was within the scope of the work I was asked to do. So I used the SmartConsole SSH shell to connect to that irregular Gateway. SmartConsole was unstable (this had not been reported) and went down, so the session was lost. Because of that, I was then given access to the remote console, which is where the latter three screenshots come from.

Redeployment was the easy fix, as it doesn't take much to provision and stage a virtual appliance. However, I would like to know what I could have done instead that would look more kosher.

G_W_Albrecht
Legend

You should always boot into maintenance mode to use lvm_manager on root!

CCSE / CCTE / CTPS / CCME / CCSM Elite / SMB Specialist
HansKazan
Contributor

I have recreated the problem and have additional input that I would like to add, just in case anyone here knows what to do.

 Console output on boot

consolecli.png

[Expert@lmao:0]# dmsetup ls
vg_splat-lv_current_snap        (253:3)
vg_splat-lv_extended_current    (253:5)
vg_splat-lv_current_snap-cow    (253:2)
vg_splat-lv_current     (253:1)
vg_splat-lv_log (253:4)
vg_splat-lv_current-real        (253:0)

# dmsetup ls --tree
vg_splat-lv_current_snap (253:3)
|-vg_splat-lv_current_snap-cow (253:2)
|  `- (8:3)
`-vg_splat-lv_current-real (253:0)
    `- (8:3)
vg_splat-lv_extended_current (253:5)
`- (8:3)
vg_splat-lv_current (253:1)
`-vg_splat-lv_current-real (253:0)
    `- (8:3)
vg_splat-lv_log (253:4)
`- (8:3)

dmsetup output on a normal gateway
[Expert@lmao-2:0]# dmsetup ls
vg_splat-lv_current     (253:2)
vg_splat-lv_log (253:1)
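Comparing the two listings makes the leftovers obvious. Here is a small sketch that filters the stuck gateway's device names down to everything a healthy gateway does not have; the two lists are copied from the dmsetup outputs above, with the device numbers stripped:

```shell
# Device names from the stuck gateway's `dmsetup ls` (numbers removed).
stuck='vg_splat-lv_current_snap
vg_splat-lv_extended_current
vg_splat-lv_current_snap-cow
vg_splat-lv_current
vg_splat-lv_log
vg_splat-lv_current-real'

# Drop the entries a normal gateway also has (exact whole-line matches);
# whatever remains is residue of the interrupted resize.
leftovers=$(printf '%s\n' "$stuck" \
  | grep -F -x -v -e 'vg_splat-lv_current' -e 'vg_splat-lv_log')
printf '%s\n' "$leftovers"
```

The four surviving entries (lv_extended_current plus the snapshot's *_snap, *-cow, and *-real devices) are the snapshot machinery lvm_manager sets up for the resize and never got to tear down.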


This is the result of lvm_manager being interrupted during the resizing process. Is there a way to manually complete the steps that lvm_manager usually takes, or to "reset" this state without impact? I have consulted all the documentation I could find, but with no luck so far.

Thank you for any input you may have!

