dehaasm
Collaborator

upgrade_reserved not enough space for upgrade

LVM overview
============
                         Size(GB)   Used(GB)   Configurable    Description
    lv_AutoSnap13798     30         30         no              Snapshot volume
    lv_current           30         17         yes             Check Point OS and products
    lv_log               1005       75         yes             Logs volume
    upgrade reserved     0          N/A        no              Not enough space for upgrade
    swap                 32         N/A        no              Swap memory volume
    unallocated space    3          N/A        no              Unused space
    -------              ----
    total                1100       N/A        no              Total size

press ENTER to continue.

 

We have a SmartEvent server that was recently upgraded from R81.20 to R82. The upgrade_reserved partition now shows 0 GB, so there is not enough space for future upgrades, and this partition is not configurable. How can we reclaim this space?

14 Replies
_Val_
Admin

Try removing some snapshots and then see if the upgrade will work. Your issue is that the snapshot partition is fully utilized.
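
For reference, a quick way to see what is actually on the box first (names and sizes will differ per system) is from clish:

    show snapshots

That lists the existing snapshots so you can decide what is safe to remove.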

Bob_Zimmerman
MVP Gold

Oof. I made this mistake when building my first Gaia systems. You'll have to rebuild the system at some point and leave more space unallocated. Snapshots and upgrades both pull from unallocated space, which isn't really made clear when installing.

You should set lv_current to be whatever size you need (30 GB looks fine here, but bigger environments may need 50+ GB), then leave at least four times that space unallocated. With your current drive, I would do 30 GB to lv_current, 800 GB to lv_log. With 32 in swap, that should leave 238 GB free. Most snapshots are about as big as the space you're actually using in lv_current. Upgrades unpack to a new logical volume, then move your old lv_current to lv_AutoSnap####, so those take up as much space as lv_current.
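
If you want to see how your own volume group is currently carved up, a rough check from expert mode is the standard LVM tooling (the volume group name, often vg_splat, depends on the install):

    vgs    # volume group totals: overall size and free (unallocated) space
    lvs    # per-volume sizes: lv_current, lv_log, swap, and any lv_AutoSnap volumes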

Vincent_Bacher
Advisor
Advisor

I had this issue in the past as well and on a vCenter (VMware) I just added another disk and extended lv_current.
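
For anyone curious, that kind of expansion is roughly the standard LVM sequence below, run from expert mode. The device and volume group names are just placeholders, so check yours first (and see Bob's warning in the next reply before relying on this long term):

    pvcreate /dev/sdb                          # new virtual disk added in vCenter (placeholder name)
    vgextend vg_splat /dev/sdb                 # add it to the existing volume group (VG name may differ)
    lvextend -L +20G /dev/vg_splat/lv_current  # grow the root logical volume
    resize2fs /dev/vg_splat/lv_current         # grow the filesystem (use xfs_growfs on the mount point if it is XFS)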

and now to something completely different - CCVS, CCAS, CCTE, CCCS, CCSM elite
Bob_Zimmerman
MVP Gold

The problem with that is when you try to upgrade, you'll get the "Your partitions are not in Check Point standard format" failure, and you'll have to rebuild anyway. It's a good option for dealing with a current crisis, since it lets you defer the rebuild, but I've never seen it let someone avoid the rebuild entirely.

Duane_Toler
MVP Silver

Ohhh ok, so I wasn't the only one! 😆  Yeah so many #LessonsLearned in those first few Gaia years!  Nowadays I make sure to have enough unused space in the volume group that is at least 2x the size of lv_current!  I prefer 3x, tho.  Plus, I don't understand what the weirdo obsession is in the DDR rules for having extra PV disks in the volume group.  That obsession defeats the purpose of using LVM entirely!  (you know... the "L" in LVM ...).  R&D has some odd opinions sometimes.

I got that "partitions not in Check Point format" error on a recent R81.20 -> R82 upgrade for CloudGuard because a management VM had multiple physical disks.  Since the R80.30 or R80.40 misaligned partition bonanza, I've had to do rebuilds on every management server to get to R81.20.  I experimented with one server and forced the R80.40 -> R81.20 upgrade and it (as expected) failed painfully.  Ugh.  Not looking forward to rebuilding some of these for R82. 😕

 

--
Ansible for Check Point APIs series: https://www.youtube.com/@EdgeCaseScenario and Substack
Bob_Zimmerman
MVP Gold

Yeah, with no explanation that Gaia was actually using LVM internally, and why some space should be left unused, it looked like the system was just telling you that you're wasting a ton of space. Another reason I wish Gaia had been based on IPSO (and therefore FreeBSD). At the time, FreeBSD had been using ZFS for years. ZFS snapshots are instant and free, and ZFS filesystems can be resized up or down with no loss of data.

Oh well! What might have been.

Maybe btrfs will be stable for more than a single mirror by R85, and we'll get a modern filesystem. Of course, we'll probably get saddled with systemd before then.

the_rock
MVP Platinum

Yea, definitely check for generated snapshots.

Best,
Andy
emmap
MVP Gold CHKP

The lv_AutoSnap13798 is the old root partition, currently taking the space that was upgrade_reserved before you did the upgrade. Remove it and you will have your upgrade_reserved space back.

dehaasm
Collaborator

So I should remove lv_AutoSnap13798 in lvm_manager, right?

the_rock
MVP Platinum

You should, yes.

Best,
Andy
Bob_Zimmerman
MVP Gold

No, you should remove it in clish using 'delete snapshot'.

That will free up the 30 GB which it consumes. That space should go into upgrade_reserved, so it won't be available for normal snapshots.
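
Roughly like this (the exact snapshot name is whatever 'show snapshots' reports; AutoSnap13798 below is just my guess from the LV name, so confirm it first):

    show snapshots
    delete snapshot AutoSnap13798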

I still highly recommend rebuilding your management when you get a chance. Better to get ahead of it before you find upgrading to R82.20 or whatever takes more space.

emmap
MVP Gold CHKP

As Bob said, either use 'delete snapshot' in clish or delete it in the WebUI, under Snapshots. Deleting the logical volume directly will not remove the other references to it elsewhere and will cause you problems.

the_rock
MVP Platinum

Hey mate,

Did you get this sorted out?

Best,
Andy
Lesley
MVP Gold

As has been posted before: if this is a virtual system, add more disk space and use lvm_manager in maintenance mode. Over the years systems get older and more storage is consumed, especially if you do in-place upgrade after in-place upgrade.

Also, soon (or perhaps already) you will be unable to create a cpinfo, since cpinfo has to be created in the / directory (lv_current).

Also add space there. You can also consider taking space from lv_log; 1 TB of logging could be too much depending on your setup (it depends on how many months of logs you want to keep).
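
If / is tight, you can at least try writing the cpinfo archive onto lv_log instead of lv_current (the path and file name below are just examples; whether this is enough depends on where cpinfo stages its temporary files):

    cd /var/log
    cpinfo -z -o /var/log/cpinfo_mgmt    # -z compresses the output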

 

-------
Please press "Accept as Solution" if my post solved it 🙂
