Hello everyone.
Thank you for taking the time to read and possibly reply to my problem. I have been looking for a solution for a while and would like to know if anyone has a better one.
I want to upgrade an open server gateway running R80.40 to R81.10. To do so I need to enlarge the root volume. The usual process works fine (add disk space in VMware, extend it internally, etc.). I entered lvm_manager and increased the root volume from 16 GB to 24 GB. While this process was running, I lost my remote session to the gateway.
Upon re-entering the Gateway through my remote session and entering lvm_manager again, I am greeted with the following output:
"There is not enough available space in the system. Try to reduce other logical volumes' size and then try again. press ENTER to continue."
When viewing the LVM storage overview, I can see that 10 GB of unused space is still present. If I then try to increase the lv_log volume, that works without any issue. So I assume some kind of state is being kept somewhere, persisting through reboot and power-down, that is interfering with the process.
Would anyone happen to know how to recover from such a situation, apart from reprovisioning the appliance?
Thanks a lot for your valuable input!
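For anyone hitting the same error, it can help to confirm what lvm_manager is actually seeing before retrying. A minimal inspection sketch from Expert mode, using only read-only LVM commands (the vg_splat volume and group names are the ones shown in the listings later in this thread; verify your own with these commands first):

```shell
# Show volume groups and their free extents -- the "unused" space
# that the LVM storage overview reports
vgs -o vg_name,vg_size,vg_free

# List all logical volumes; leftovers from an interrupted resize
# (e.g. an *_extended_* or snapshot volume) show up here
lvs -o lv_name,vg_name,lv_size,lv_attr

# Device-mapper view of the same thing; snapshot, *-cow and
# *_extended_* entries suggest an unfinished lvm_manager operation
dmsetup ls
```

These commands only display state and make no changes, so they are safe to run before deciding on a recovery step.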
I followed the suggestion in this thread:
https://community.checkpoint.com/t5/Management/Disk-Extend-Problem/m-p/36747/highlight/true#M7664
lvremove -f /dev/vg_splat/lv_extended_current
This removed the unfinished "new" root and allowed us to run the lvm_manager resize again successfully. I hope it helps someone in the future. Be mindful of the possible impact this can have if not done properly.
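For context, the recovery boils down to deleting the leftover volume that the interrupted run left behind and then re-running the tool. A hedged sketch of the sequence, run from maintenance mode (the lv_extended_current name matches the device listing later in this thread; confirm your own names with lvs before removing anything):

```shell
# 1. Confirm the leftover volume from the interrupted resize exists
lvs vg_splat

# 2. Remove it (destructive -- double-check the name first)
lvremove -f /dev/vg_splat/lv_extended_current

# 3. Verify only the expected volumes remain, then re-run the resize
lvs vg_splat
lvm_manager
```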
Some additional information is attached. The first image is the session that got disconnected. After reconnecting, I noticed in lvm_manager that the root volume hadn't grown. I waited ~15 minutes before initiating a manual reboot; on these devices the extension process normally takes only ~5 minutes. All the other images were taken after the reboot and power-down.
This is just me, but personally, I would never do this process unless I had console access to the appliance. I had the same thing happen to me before, and that's a valuable lesson learned.
Andy
Thank you Andy, this will be a valuable lesson for me as well!
Let us know how you solve it.
Andy
What is the HW platform used here and were you performing this process from maintenance mode?
A VMware open server, on top of an ESXi hypervisor. The exact specs are unknown to me, but the device runs dual-core.
I don't have on hand the SK that I followed for the other 120 of these, but yes, all devices were booted into maintenance mode to perform these actions. The problem here was that regular SSH and console access had not been arranged at the time, although it was within the scope of work I was asked to do. So I used the SmartConsole SSH shell to connect to that one irregular gateway. SmartConsole was unstable (this had not been reported) and went down, so the session was lost. Because of that, I was then given access to the remote console, which is where the latter three screenshots come from.
Redeployment was the easy fix, as it doesn't take much to provision and stage a virtual appliance. However, I would like to know what I could have done instead that would look cleaner.
You should always boot into maintenance mode to use lvm_manager on the root volume!
I have recreated the problem and have additional input to add, in case anyone here knows what to do.
Console output on boot:
[Expert@lmao:0]# dmsetup ls
vg_splat-lv_current_snap (253:3)
vg_splat-lv_extended_current (253:5)
vg_splat-lv_current_snap-cow (253:2)
vg_splat-lv_current (253:1)
vg_splat-lv_log (253:4)
vg_splat-lv_current-real (253:0)
# dmsetup ls --tree
vg_splat-lv_current_snap (253:3)
|-vg_splat-lv_current_snap-cow (253:2)
| `- (8:3)
`-vg_splat-lv_current-real (253:0)
`- (8:3)
vg_splat-lv_extended_current (253:5)
`- (8:3)
vg_splat-lv_current (253:1)
`-vg_splat-lv_current-real (253:0)
`- (8:3)
vg_splat-lv_log (253:4)
`- (8:3)
dmsetup output on a normal gateway:
[Expert@lmao-2:0]# dmsetup ls
vg_splat-lv_current (253:2)
vg_splat-lv_log (253:1)
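A quick way to isolate the leftovers is to diff the device-mapper names from the broken gateway against a healthy one. A small sketch using the two listings above (names hard-coded from this thread; it uses only standard text tools, so it is safe to run anywhere):

```shell
#!/bin/sh
# Device names copied from the broken gateway's `dmsetup ls` output
broken='vg_splat-lv_current_snap
vg_splat-lv_extended_current
vg_splat-lv_current_snap-cow
vg_splat-lv_current
vg_splat-lv_log
vg_splat-lv_current-real'

# Device names from a healthy gateway
healthy='vg_splat-lv_current
vg_splat-lv_log'

# comm needs sorted input; a fixed byte-wise sort keeps both
# lists comparable regardless of locale
export LC_ALL=C
printf '%s\n' "$broken"  | sort > /tmp/broken.txt
printf '%s\n' "$healthy" | sort > /tmp/healthy.txt

# Print names only present on the broken gateway: the snapshot
# devices and lv_extended_current left behind by the interrupted
# lvm_manager resize
comm -23 /tmp/broken.txt /tmp/healthy.txt
```

On these inputs the output is exactly the four extra devices (lv_current-real, lv_current_snap, lv_current_snap-cow, lv_extended_current), which is what lvremove then has to clean up.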
This is the result of lvm_manager being interrupted during the resize. Is there a way to manually complete the steps that lvm_manager usually takes, or to "reset" this without impact? I have consulted all the documentation I can find, but with no luck so far.
Thank you for any input you may have!