Thomas_Carl_Pet
Contributor

Resize disk on XFS (R81.10)

We have a VSLS cluster with four 28000 appliances. After installation and putting them into production we are noticing that we are running out of disk space on lv_current. What is the recommended procedure to resize it? I'm not sure that I can do it with lvm_manager, since we do not have any unallocated (free) space.

 

Filesystem                       Size  Used  Avail  Use%  Mounted on
/dev/mapper/vg_splat-lv_current   32G   30G   2.9G   92%  /
/dev/md0                         291M  100M   177M   37%  /boot
tmpfs                             63G  275M    63G    1%  /dev/shm
/dev/mapper/vg_splat-lv_log      292G   50G   243G   17%  /var/log
cgroup                            63G     0    63G    0%  /sys/fs/cgroup

LVM overview
============
                     Size(GB)  Used(GB)  Configurable  Description
hwdiag                      1         1  no            Snapshot volume
lv_AutoSnap262500          32        32  no            Snapshot volume
lv_current                 32        30  yes           Check Point OS and products
lv_fcd_GAIA                 8         8  no            Factory defaults volume
lv_fcd_R80.40               8         8  no            Factory defaults volume
lv_log                    292        50  yes           Logs volume
upgrade reserved           36       N/A  no            Reserved space for version upgrade
swap                       32       N/A  no            Swap memory volume
unallocated space           5       N/A  no            Unused space
-----------------   --------
total                     446       N/A  no            Total size

 

G_W_Albrecht
Legend

I find it strange that the 480GB HD is partitioned with a 32GB root and a 292GB log partition - who needs that much log space on a GW? Logs take up space on the SMS, but usually not on the GWs. I would do a reformat and reinstall, as this does not look good at all. But which CP version is currently running here, and how did the partitioning happen?

CCSP - CCSE / CCTE / CTPS / CCME / CCSM Elite / SMB Specialist
Thomas_Carl_Pet
Contributor

This is the pre-defined partitioning on a Check Point 2800 appliance with R81.10.

G_W_Albrecht
Legend

I would suggest opening an SR with TAC - to me this configuration looks correct, although I do not understand it. /var/log has to have space for core dumps as well, so that space is needed. No idea why Factory Default, Snapshot, Swap and Upgrade have no free space on their partitions.

CCSP - CCSE / CCTE / CTPS / CCME / CCSM Elite / SMB Specialist
Thomas_Carl_Pet
Contributor

Just to be sure.
So you do not think I could resize lv_current (which now has around 2 GB of free space), and that it should stay as pre-defined unless TAC tells me something else?

the_rock
Legend

You should be able to use the built-in lvm_manager to resize the partitions.

Thomas_Carl_Pet
Contributor

Thanks, I will try.

I was just worried because we do not have any unallocated volumes.

the_rock
Legend

That's fine. As long as you run it and see it can be resized, you are good to go.

Select action:

1) View LVM storage overview
2) Resize lv_current/lv_log Logical Volume
3) Quit
Select action:

 

You may get this when choosing option 2:

Resizing logical volumes is supported in maintenance mode only.
Please boot in maintenance mode and re-run lvm_manager to resize the logical volume.

press ENTER to continue.

To get around it, you can download the script I attached and run it directly on the firewall. Just do chmod 777 lvm_manager.sh and then run ./lvm_manager.sh
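
For example, something along these lines from expert mode (the /home/admin path is just an example location; the script itself is the one attached above):

# copy the attached script to the gateway (run from your workstation; the IP is a placeholder)
scp lvm_manager.sh admin@<gateway-ip>:/home/admin/
# then, on the gateway in expert mode: make it executable and run it
cd /home/admin
chmod 777 lvm_manager.sh
./lvm_manager.sh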

Andy

Thomas_Carl_Pet
Contributor

Great.
I'll update this tomorrow

Thomas_Carl_Pet
Contributor

This did not work, since we do not have any free disk space (due to the predefined configuration on the 28000 appliances). At the moment I am struggling to get some advice out of TAC.

the_rock
Legend

Can you send a screenshot of what you get when you run lvm_manager and select 2 to resize?

Andy

Thomas_Carl_Pet
Contributor

Here it is

the_rock
Legend

OK, got it. Usually, if you were adding an HDD to, say, a VM, you would follow this:

How to add hardware resources, such as log storage, to a Virtual Machine running Gaia OS (checkpoint...

Since yours is a physical appliance, just wondering - can you verify in the WebUI whether any snapshots/backups can be deleted?
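
If the WebUI is not handy, the same info can also be pulled from the CLI - something like this from expert mode (standard Gaia clish commands, just as a reference):

# list existing Gaia snapshots
clish -c "show snapshots"
# status of the last system backup
clish -c "show backup status"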

Andy

Thomas_Carl_Pet
Contributor

It is VSX, so this is not possible. But from clish I can tell you that no snapshots or backups are taking up space.

And I have done all the cleaning possible. The issue is that, since this is VSX, the VSs themselves take up space.
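
For reference, a quick way to see what is actually using space on lv_current from expert mode is something like this (plain Linux commands; -x keeps du on the root filesystem, so the separate /var/log mount is not counted):

# per-directory usage on the root filesystem only, sorted by size
du -xh --max-depth=1 / 2>/dev/null | sort -h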

 

But a datacenter firewall configuration (the 28000 appliance is meant for datacenters) should be able to handle this.

And we are not even able to add another disk, since it is a RAID-1 solution with only two storage slots.

 

In the Linux world it is possible to shrink a filesystem (although XFS itself can only be grown, not shrunk). But I do not know why Check Point has defined so little space for lv_current and so much space for lv_log (on a gateway).

the_rock
Legend

I hear ya, yeah, that's definitely puzzling to me as well. Maybe worth checking with TAC to see what your options are.

Thomas_Carl_Pet
Contributor

I've been in dialogue with them all day. In the end they wanted this to be solved (hopefully) by R&D.

the_rock
Legend

Please let us know how it gets solved. In the spirit of the community, we always love it when people share the solutions to their problems, as it helps others.

Thomas_Carl_Pet
Contributor

I will absolutely keep you updated.

Thomas_Carl_Pet
Contributor

I have had meetings with Check Point R&D. We have three options:

1) Receive a new pre-defined configuration file for the logical volumes and apply it during a reinstallation. We would need this file from Check Point, and I do not know how long it will take to receive it.

2) Reduce lv_log with lvreduce and add the freed space to lv_current with lvextend. These steps have not been tested much at Check Point, so they want to test them in the lab on a 28000 appliance first.

3) Remove the lv_autosnap volume and add its space to lv_current. We would then lose the possibility to take snapshots in the future. Since this is a VSX environment a snapshot is not absolutely required, as most of the configuration lives in the CMA (see the rough sketch after this list).

We will have more meetings to decide how to proceed. But at least there are some options now.
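
Just to illustrate the idea behind option 3: it would roughly come down to standard LVM/XFS steps like the ones below. The volume names are taken from the LVM overview in my first post; this is only a sketch, and we would of course only run whatever procedure Check Point R&D actually provides.

# remove the automatic snapshot volume to free its 32 GB in vg_splat
lvremove /dev/vg_splat/lv_AutoSnap262500
# add that 32 GB to lv_current
lvextend -L +32G /dev/vg_splat/lv_current
# grow the XFS filesystem mounted on / into the enlarged volume
xfs_growfs /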

the_rock
Legend

Sounds like option 2 might be best, but maybe better to wait for R&D to do the testing.

Thomas_Carl_Pet
Contributor

Yes. For me, option 1 or 2 is preferable, but we will wait for some additional meetings with Check Point R&D.

 

Thomas_Carl_Pet
Contributor

Sorry - 28000 appliance.

