Here is a small guide on how to add a new disk >2 TB to your firewall and expand the size of /var/log
Check that we are running a 64-bit kernel (needed for handling >2 TB disk sizes)

[Expert@firewall:0]# uname -a
Linux firewall 2.6.18-92cpx86_64 #1 SMP Sun Jan 21 10:26:26 IST 2018 x86_64 x86_64 x86_64 GNU/Linux
List the disks..
List the disks with fdisk -l or parted -l

[Expert@firewall:0]# parted -l
Model: Msft Virtual Disk (scsi)
Number  Start  End  Size  Type  File system  Flags

Model: Msft Virtual Disk (scsi)
Number  Start  End  Size  Type  File system  Flags

Model: Msft Virtual Disk (scsi)
Number  Start  End  Size  File system  Name  Flags
Now to the LVM part..
Prepare the new disk to be used in LVM using the parted utility

[Expert@firewall:0]# parted -s /dev/sdc mklabel gpt
[Expert@firewall:0]# parted -s /dev/sdc unit mib mkpart primary 1 100%
[Expert@firewall:0]# parted -s /dev/sdc set 1 lvm on
[Expert@firewall:0]# # Ask the kernel to re-read the partition table
[Expert@firewall:0]# partprobe
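Optionally, you can sanity-check the new partition layout before continuing; a minimal check, assuming the new disk really is /dev/sdc as above:

[Expert@firewall:0]# # Optional: confirm the GPT label, the single partition and the lvm flag
[Expert@firewall:0]# parted -s /dev/sdc print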
One could skip this partitioning step and simply create the LVM physical volume (next step) on the whole disk, but I do it this way so the disk carries visible information about being in use: when other sysadmins or tools list the disk they will see a partition instead of a seemingly empty disk, which should stop them from assuming it is "free" to use.
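For completeness, a minimal sketch of that whole-disk alternative (not what this guide does), assuming you accept the "looks empty" drawback described above:

[Expert@firewall:0]# # Alternative: create the physical volume on the whole disk, no partition table
[Expert@firewall:0]# pvcreate /dev/sdc
[Expert@firewall:0]# vgextend vg_splat /dev/sdc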
Create the LVM physical volume and add it to the existing volume group

[Expert@firewall:0]# # Tag/prepare/reserve the disk so it can be used in the LVM/VG
[Expert@firewall:0]# pvcreate /dev/sdc1
[Expert@firewall:0]# # Then add the new LVM disk to the volume group
[Expert@firewall:0]# vgextend vg_splat /dev/sdc1
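If you want to confirm that the volume group actually grew, an optional check (using the same vg_splat name as above) could look like this:

[Expert@firewall:0]# # Optional: /dev/sdc1 should be listed under vg_splat and VFree should have grown
[Expert@firewall:0]# pvs
[Expert@firewall:0]# vgs vg_splat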
Now I will list the current location of /dev/vg_splat/lv_log (that is where the /var/log file system resides) and see where the data is placed on the two disks I now have in the volume group vg_splat.
My goal is to have the log file system reside on the new disk only and not on the OS disk..
List the current location of the /var/log file system (the lv_log logical volume)

[Expert@firewall:0]# lvs -o +devices    # use "pvdisplay -m" for a more detailed view
  LV         VG       Attr   LSize  ... Devices
  lv_current vg_splat -wi-ao 20.00G     /dev/sda3(0)
  lv_log     vg_splat -wi-ao 63.00G     /dev/sda3(640)   <- Now we want to move this data to sdc1
(The above output shows that lv_log resides on partition 3 of disk sda (/dev/sda3), and we want to move it to the new disk, sdc.)
Now let's move the existing /var/log data off the disk that also holds the operating system, both to speed up I/O and to ensure that log data is only allocated on the new disk. We can do this in the background without blocking existing I/O during the move.
I recommend running it in the background by adding the extra option "--background". That way you can also disconnect the SSH session instead of waiting for the command to finish (it could take hours); a sketch of the background variant follows the command box below.
Move the existing log file system from the system disk to the dedicated logfile disk (shown as a foreground process)

[Expert@firewall:0]# # NB: I recommend adding the extra option --background to the command below
[Expert@firewall:0]# # Move [FROM disk] [TO disk]
[Expert@firewall:0]# pvmove -n /dev/vg_splat/lv_log /dev/sda3 /dev/sdc1
  /dev/sda3: Moved: 0.6%
  /dev/sda3: Moved: 1.4%
  ...
  /dev/sda3: Moved: 100%
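A minimal sketch of the background variant mentioned above; --background is a standard pvmove option, and while the move runs its progress shows up in the Copy% column of lvs:

[Expert@firewall:0]# # Same move, but pvmove returns to the shell immediately
[Expert@firewall:0]# pvmove --background -n /dev/vg_splat/lv_log /dev/sda3 /dev/sdc1
[Expert@firewall:0]# # Check progress later (the lv_log line shows the copy percentage)
[Expert@firewall:0]# lvs -o +devices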
Then verify that the data has been moved correctly..
List the location of the logical volumes again on the PV disks.

[Expert@firewall:0]# lvs -o +devices
  LV         VG       Attr   LSize  Origin Snap%  Move Log Copy% Devices
  lv_current vg_splat -wi-ao 20.00G                               /dev/sda3(0)
  lv_log     vg_splat -wi-ao 63.00G                               /dev/sdc1(0)   <-- Perfect
(the above command shows that lv_log only resides on /dev/sdc1 now)
Now I want to expand the file system on THE NEW DISK only.
TIP:
When you expand a filesystem on a logical volume you can utilize all the free space by using "100%FREE" (without the quotation marks) instead of a fixed size like the "+3910G" in my example below; a sketch of that variant follows the command box below.
.. so let's expand the logical volume, giving /dev/sdc1 as an argument.
Extend the log file system to utilize the new space

[Expert@firewall:0]# lvextend -L +3910G /dev/vg_splat/lv_log /dev/sdc1
  Extending logical volume lv_log to 3.88 TB
  Logical volume lv_log successfully resized
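As per the tip above, a sketch of the 100%FREE variant; note that the percentage syntax goes with the lowercase -l (extents) option rather than -L:

[Expert@firewall:0]# # Alternative: take every remaining free extent on /dev/sdc1 instead of a fixed +3910G
[Expert@firewall:0]# lvextend -l +100%FREE /dev/vg_splat/lv_log /dev/sdc1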
Then we resize the file system to fit the logical volume..
Resizing the file system

[Expert@firewall:0]# resize2fs /dev/vg_splat/lv_log
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/vg_splat/lv_log is mounted on /var/log; on-line resizing required
Performing an on-line resize of /dev/vg_splat/lv_log to 1041498112 (4k) blocks.
The filesystem on /dev/vg_splat/lv_log is now 1041498112 blocks long.
Check that data still resides on /dev/sdc1 for lv_log
List the LVM / PV location again..

[Expert@firewall:0]# lvs -o +devices
  LV         VG       Attr   LSize  Origin Snap%  Move Log Copy% Devices
  lv_current vg_splat -wi-ao 20.00G                               /dev/sda3(0)
  lv_log     vg_splat -wi-ao  3.88T                               /dev/sdc1(0)
An extra check to see the file system size in human-readable format (-h)
Verify that the log file system has been expanded

[Expert@firewall:0]# df -h /var/log
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/vg_splat-lv_log  3.8T   40G  3.6T   2% /var/log
An extra check to ensure we can write/read the filesystem..
Verify that the system can write to the file system

[Expert@firewall:0]# touch /var/log/deleteme && ls -al /var/log/deleteme && rm /var/log/deleteme
-rw-rw---- 1 admin users 0 Oct 22 13:42 /var/log/deleteme
[Expert@firewall:0]# ls -al /var/log/deleteme
ls: /var/log/deleteme: No such file or directory
That's it
A "Quickie" to run in expert mode |
---|
parted -s /dev/sdc mklabel gpt parted -s /dev/sdc unit mib mkpart primary 1 100% parted -s /dev/sdc set 1 lvm on partprobe pvcreate /dev/sdc1 vgextend vg_splat /dev/sdc1 pvmove --background -n /dev/vg_splat/lv_log /dev/sda3 /dev/sdc1 lvextend -L +3910G /dev/vg_splat/lv_log /dev/sdc1 resize2fs /dev/vg_splat/lv_log |
The end
Just a question - why not use lvm_manager? Also, sk105065 gives a slightly different procedure...
I can confirm Keld Norman's solution is working, because he reconfigured our splat partition that way on our system.
At first I wasn't aware of the SK, so on my virtual machine I extended the HDD of the VMDK file, BUT Gaia did not see the extended part of the disk. To do it that way I would have had to add a new HDD to the VM and then add the new storage to Gaia via lvm_manager. So Keld moved around a partition that was in the way and extended the existing Gaia and splat partitions.
I think the SK is there to recommend a way for users to easily add storage without the risk of destroying their installation, which would otherwise leave Check Point with some unsatisfied customers.
BR
Kim
Because I read here: lvm_manager successor on R80.10, that lvm_manager was not supported anymore.
If you read more, you will find that it is still supported - but you have to enter maintenance mode before running the script.
SK95566 now clearly says:
- Perform a backup/snapshot before proceeding!!!
- Reboot and perform the operation in Maintenance Mode only!!!
Very nicely written; we did this a while ago. Just remember that it took forever (I can't remember which command, but it was around 4 hours I think).
Nice!
Yes, it is very nicely written and helpful. I gave you some points for it.
THX
Heiko
Just used this very nice guide, up until the part with "pvmove"; then I used lvm_manager to expand lv_log.
It worked like a charm !
This was done on CP R80.20 manager
How does the new XFS filesystem complicate this with R80.20 and beyond?
Does XFS work well with LVM? Is the specific filesystem irrelevant when talking about LVM?
thanks -GA
A nudge on this topic: does XFS change anything in this discussion (part of Gaia with the 3.x kernel)?
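In case it helps: the LVM steps above (pvcreate, vgextend, pvmove, lvextend) are filesystem-agnostic, so only the final grow step would differ. If /var/log sits on XFS rather than ext3 on your version, a rough sketch of that last step (assuming the same lv_log layout as in the guide) would be:

# Check which filesystem /var/log actually uses first
mount | grep /var/log
# For XFS, grow the mounted filesystem by mount point instead of running resize2fs
xfs_growfs /var/log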