Dear all,
We would like to know whether it is possible to change the size of the partitions on a 16000 and a 23800 appliance.
Both the 16000 and 23800 appliances have two 480 GB SSDs, configured in RAID 1.
When we start the installation of Gaia on the 16000 and 23800 appliances using the R80.40 ISO, we don't have the option to change the size of the partitions. This is very unfortunate when you would like to use VSX.
When the unattended installation is finished, we see the default layout: lv_current is 32 GB and lv_log is 292 GB.
On the 23800 appliances (VSX/VSLS) we found out in 2019 (the hard way) that 32 GB is not enough to host 25 virtual systems; we used lvm_manager to extend lv_current to 64 GB; at the moment 58 GB is consumed.
Our plan is to host 15 virtual systems on the 16000 appliances (VSX/VSLS). Because lv_log is 292 GB in size, we can't expand the lv_current volume to 64 GB (we expanded it to 48 GB). Taking a snapshot is barely possible. Please note that reducing the size of lv_log is not supported.
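For reference, the resulting layout and the space left for snapshots can be inspected from Expert mode with the standard LVM tools (a minimal sketch; vg_splat as the volume group name is the usual Gaia default and an assumption here):
# Logical volumes and their current sizes (lv_current, lv_log, ...)
lvs
# Unallocated space left in the volume group (this is what snapshots draw from)
vgs vg_splat
# Filesystem-level usage of the active system partition and the log partition
df -h / /var/log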
Questions:
Notes:
Thank you in advance for your feedback.
Kind regards,
Kris Pellens
I haven't done it on a physical appliance in a while, but with an interactive install you should be able to set the partition sizes to your specifications.
Offhand, I don't know whether this can be specified with a non-interactive installation, which I assume was done with a USB drive prepared with ISOmorphic.
1. They are set by our Check Point design team per appliance distribution, taking into account usage (Mgmt/GW), disk space, maximum supported memory (as memory affects both the swap partition size and the kernel core dump size) and other factors, and they are hard-coded per disk size per appliance.
2. Different HW specs (memory support and other aspects); it can always be modified via lvm_manager after install (with XFS you can only increase).
3. No, it is not supported (only VMs, open servers, Smart-1 3050, Smart-1 3150, Smart-1 5050 and Smart-1 5150).
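For reference, the grow-only limitation in point 2 comes from XFS itself: an XFS filesystem can be grown online but never shrunk. A minimal sketch of the underlying steps that lvm_manager automates, assuming the default Gaia volume group vg_splat (on an appliance, prefer lvm_manager itself):
# Extend the logical volume to 64 GB, taking the space from free extents in the volume group
lvextend -L 64G /dev/vg_splat/lv_current
# Grow the mounted XFS filesystem to fill the enlarged volume (grow-only; XFS cannot shrink)
xfs_growfs /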
Very interesting discussion.
Can someone please explain the root partition limitation with the virtual systems?
What is the problem: the number of VSs, the number of interfaces per VS, anything else?
Is there an official statement available from Check Point?
Maybe some of the VSX guys here have similar experience. @Kaspars_Zibarts @Maarten_Sjouw @HeikoAnkenbrand @Danny
Wolfgang
Thank you.
On the R80.40 ISO there is indeed a file (appliance_configuration.xml) in which the partition (volume) sizes, including lv_current and lv_log, are defined per appliance. E.g. for a 16000T with a RAID 1:
<appliance_partitioning>
<layout min_disksize="150528M" max_disksize="491520M">
<volume name="lv_current">32768M</volume>
<volume name="lv_log">299008M</volume>
<volume name="lv_fcd">8192M</volume>
<volume name="hwdiag">1024M</volume>
<volume name="max_swap">32768M</volume>
</layout>
<layout min_disksize="491520M">
<volume name="lv_current">32768M</volume>
<volume name="lv_log">364544M</volume>
<volume name="lv_fcd">8192M</volume>
<volume name="hwdiag">1024M</volume>
<volume name="max_swap">32768M</volume>
</layout>
</appliance_partitioning>
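(For anyone who wants to look this up themselves, a minimal sketch; the ISO filename is a placeholder:)
# Loop-mount the installation ISO and locate the partitioning definition
mkdir -p /mnt/iso
mount -o loop Check_Point_R80.40_Gaia.iso /mnt/iso
find /mnt/iso -name appliance_configuration.xml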
Because the size of lv_current is only 32 GB (on a 480 GB RAID 1), and because we can't increase lv_current by much, the number of virtual systems (with some software blades enabled) is very limited, i.e. fewer than 15. As soon as you increase the size of lv_current, there is no space left to create a snapshot.
Any suggestions on how to provision 15 virtual systems on a 48 GB lv_current volume?
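One way to approach the sizing is to first measure what each existing virtual system actually consumes on lv_current. A sketch, assuming the usual VSX layout where per-VS data lives under $FWDIR/CTX/ (CTX00002, CTX00003, ... map to the VS IDs):
# Per-VS disk consumption on lv_current, largest first
du -sh $FWDIR/CTX/* 2>/dev/null | sort -rh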
Interesting - we never had any issues with the root partition on our appliances. This is a screenshot from a 26000T running R80.30 T219 with 20 VSes,
and a 23800 running R80.40 T78 with only 6 VSes.
Could it be that you are writing a lot to your user directories (/home/xxx) with some scripts? If so, place your files in /var/log/ instead.
@Kaspars_Zibarts thanks.
We can see the same on all of our customers' VSX deployments; we never had a problem with this.
@Kris_Pellens you wrote "On the 23800 appliances (VSX/VSLS) we found out in 2019 (the hard way) that 32 GB is not enough to host 25 virtual system; we used lvm_manager to extend lv_current to 64 GB; at the moment 58 GB is consumed."
Which directories are consuming the space? Any statement from TAC or RnD ?
Wolfgang
@Kris_Pellens Looking at your other article (Check Point 23800 appliances / VSX/VSLS / Files in /tmp folder) about the /tmp directory being large - it seems that's your problem. You need to check what's abusing /tmp 🙂 17 GB is way too much.
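A quick way to find the offenders from Expert mode (a sketch):
# Largest items in /tmp, biggest first
du -sh /tmp/* 2>/dev/null | sort -rh | head -20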
Hello Kaspars,
Thank you for your feedback. The /tmp directory indeed requires a cleanup. 😁
May I ask how many blades you have enabled on the appliance where you have 20 virtual systems?
On our 23800 appliances, we have 25 virtual systems:
/tmp is 17 GB, /opt is 8.6 GB (see du.txt).
One day we had an outage because lv_current ran out of space. Check Point TAC then expanded lv_current from 32 GB to 64 GB.
Our concern is that we may face the same issue on the 16000T appliances in the future.
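In the meantime, a trivial check from cron can at least warn us before it happens again (a sketch; the 80% threshold is arbitrary):
# Warn when lv_current (mounted on /) exceeds 80% usage
df -P / | awk 'NR==2 && $5+0 > 80 {print "WARNING: lv_current at "$5}'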
Kind regards,
Kris Pellens
Hello Wolfgang,
Thank you for your feedback. /tmp is consuming the most, then /opt. I included the du output in the other reply.
Also, the more blades we enable, the more disk space is consumed from lv_current, obviously.
May I ask which blades you have enabled on those virtual systems?
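For comparison, this is how we list ours from Expert mode (a sketch; I am not certain enabled_blades is context-aware on VSX, so treat the vsenv step as an assumption):
# List the virtual systems on this member
fw vsx stat -l
# Switch to a VS context (ID 2 as an example) and list its enabled blades
vsenv 2
enabled_blades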
Thank you.
Regards,
Kris
I'm afraid we're only using FW and IA on those. But I'm sure /tmp can be cleaned up; you just need to find what's causing the most pain.
Did you try addressing this with a TAC case?
ADMIN NOTE: some confidential info has been removed.
Hi Val,
Thank you for your feedback. In March 2019 for the 23800s (VSX with 25 virtual systems): SR #**********. Here TAC resized lv_current from 32 GB to 64 GB. In November 2020 for the 16000s (VSX with 15 virtual systems): SR #***************. Here I decided to increase to 48 GB to make sure I never have to write an outage report to the director. 😁 Because of the resize, soon I will not be able to take snapshots (it's sometimes good to have snapshots ... remember Schrödinger's cat).
We have a lot of blades enabled on the virtual systems. On the 23800s with 25 virtual systems, lv_current consumption never dropped below 32 GB again.
Today, I received the following message from the relevant group manager (SR #**************):
"********************.
This currently has to be taken up with the local office to understand the best recommended setup for hosting this number of Virtual Systems, as well as the recommendations on the required disk space."
In the afternoon I have a discussion with the sales team to see what can be done in the long run.
Many thanks. And also thank you for the great sessions you give.
Kind regards,
Kris Pellens
I will check and get back to you
Dear Val,
Trust you are doing well.
Have you received any feedback from R&D?
We are no longer able to create a snapshot on the 16000T appliances, even though we have plenty of space left on lv_log.
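For reference: Gaia snapshots are carved out of the unallocated space in the volume group, not out of free space inside the volumes, which is why the free space on lv_log does not help. What is actually left can be checked with (vg_splat being the usual Gaia volume group name):
# The VFree column shows the unallocated space that snapshots draw from
vgs vg_splat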
In the end, it boils down to someone at Check Point modifying the appliance partitioning definition (see the appliance_configuration.xml excerpt quoted earlier).
Your promised feedback is highly appreciated.
Kind regards,
Kris
There is an internal thread in progress, led by your account manager. I understand the gravity of the situation. Let me check again.