There is currently an issue in Gaia where backups grow indefinitely and cannot be configured to auto-delete. As a result, the /var/log/ directory fills up and causes all sorts of application issues. The job below runs every two months and automatically removes any backups older than a specified number of days (30 days in the example). Native Linux crontab is used because the normal Gaia cron scheduler will not accept this command through the WebUI or CLI.
In Gaia Clish, create a root-level user without the ability to log in:
HostName> add user jobuser uid 0 homedir /home/jobuser
HostName> save config
Note: Do not give the user a password or any Gaia roles; neither is needed.
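Optionally, confirm the account exists before moving on (the exact output columns vary by Gaia version):
HostName> show users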
In Expert mode, create a cron job for this user:
[Expert@HostName]# crontab -u jobuser -e
(This creates and allows you to edit the file /var/spool/cron/jobuser).
Syntax for the entry in crontab:
| Minute | Hour | Day of Month | Month | Day of week | Command |
|---|---|---|---|---|---|
| 0 | 0 | 1 | 1,3,5,7,9,11 | * | /usr/bin/find /var/log/ -mtime +30 -wholename "/var/log/CPbackup/backups/*.tgz" -delete |
Save the file and exit the vi editor: press Shift+: then x, then Enter.
This cleans up backups older than 30 days at midnight on the specified day of every other month (in this example, on the 1st of every second month).
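For reference, the completed entry in /var/spool/cron/jobuser is the table row above written as a single line, and you can list the job afterwards to verify it was saved:
0 0 1 1,3,5,7,9,11 * /usr/bin/find /var/log/ -mtime +30 -wholename "/var/log/CPbackup/backups/*.tgz" -delete
[Expert@HostName]# crontab -u jobuser -l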
It worked successfully after a slight change to the command:
/usr/bin/find /var/log/CPbackup/backups/ -mtime +30 -name "*.tgz" -delete
It worked for me. Can you share the error you are getting, or does it simply not clean up as expected?
--Juan
Here is the error:
]# /usr/bin/find /var/log/ -mtime +30 -name "/var/log/CPbackup/backups/*.tgz" -delete
/usr/bin/find: warning: Unix filenames usually don't contain slashes (though pathnames do). That means that '-name /var/log/CPbackup/backups/*.tgz' will probably evaluate to false all the time on this system. You might find the '-wholename' test more useful, or perhaps '-samefile'. Alternatively, if you are using GNU grep, you could use 'find ... -print0 | grep -FzZ /var/log/CPbackup/backups/*.tgz'.
This fixes it – I’ll update
/usr/bin/find /var/log/ -mtime +30 -wholename "/var/log/CPbackup/backups/*.tgz" -delete
That worked!
05 11 * * * /usr/bin/find /var/log/ -wholename "/var/log/CPbackup/backups/*.tgz" -delete
This works; it deletes the old .tgz files every day.
You do not want to delete all but the newest file; some rotation is advised. If your OS is corrupted, the latest backup is likely to contain those corrupted files, but several previous ones might not.
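A minimal sketch of such a rotation, assuming GNU coreutils are available in Expert mode; the path matches the one used above, and keeping the 3 newest backups is just an illustrative choice:
[Expert@HostName]# ls -1t /var/log/CPbackup/backups/*.tgz 2>/dev/null | tail -n +4 | xargs -r rm -f
Here ls -1t lists the backups newest first, tail -n +4 skips the three newest, and xargs -r rm -f removes whatever remains (doing nothing if the list is empty). The same line could be scheduled via crontab instead of the plain find command.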