Gentlemen, I hope you are all well! I would like to share a situation someone has been seeing: on the active member of an R81.10 Take 172 cluster (7000-series appliance), the /var/log partition of the firewall gateway gradually fills until it is almost full, and once full it starts deleting files and freeing up the partition. I have been following this behavior closely; it soon returns to normal and then goes months without happening again. Have you ever experienced this? Any experience in this regard?
Thanks, Rodrigo
It sounds like it's logging locally, is it having trouble communicating with its configured log servers?
What does this command say?
cpstat mg -f log_server // on SmartCenter
cpstat fw -f log_connection // on gateway
BR
Akos
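Akos's suggestion can be paired with a quick look at the disk itself. A minimal sketch of the checks, assuming a Linux-based gateway shell; the cpstat call is the one Akos names, guarded so the snippet also runs on hosts where it is absent:

```shell
# Quick checks when /var/log keeps filling on the gateway.
df -h /var/log                                   # how full is the log partition?
du -sh /var/log/* 2>/dev/null | sort -rh | head  # which entries are the largest?
# Check Point-specific: is the gateway forwarding logs or writing locally?
command -v cpstat >/dev/null && cpstat fw -f log_connection || true
```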
Hey Rodrigo,
Maybe there is a cron job you might not be aware of? I have seen that directory fill up before, but most people just delete whatever is not needed.
For example, if you wish to look for files bigger than 500 MB, you would run -> find /var/log -size +500M
Best,
Andy
One other thing worth checking: make sure there are no old files "lingering" in that directory from before that could be deleted.
Andy
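Andy's find tip can also be extended with an age-based search for those lingering files. A sketch, assuming GNU find and that /var/log is the partition in question; the 500 MB and 90-day thresholds are arbitrary examples:

```shell
LOGDIR=/var/log   # adjust if your log partition mounts elsewhere
# Files over 500 MB -- the largest offenders:
find "$LOGDIR" -type f -size +500M 2>/dev/null | head
# Files not modified in 90+ days -- candidates for archiving or deletion:
find "$LOGDIR" -type f -mtime +90 2>/dev/null | head
```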
Hello, how are you? The output of the command on the firewall (cpstat fw -f log_connection)
shows that one of the log servers is unavailable and the gateway is saving logs locally (that VM is actually powered off and being migrated to another environment), and I had not removed that log server from the clusters' log-forwarding configuration. I have removed it now, leaving only the active log server until the migration is finished, and I will follow up on the case.
For now, thank you very much.
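Since the partition only drew attention once it was nearly full, a simple threshold check is one way to catch the local-logging fallback earlier next time. A minimal sketch; the 80% threshold is an arbitrary example, not a Check Point recommendation:

```shell
# Warn when /var/log usage crosses a threshold; could be run from cron.
THRESH=80
USED=$(df -P /var/log | awk 'NR==2 {gsub(/%/,""); print $5}')
if [ "$USED" -ge "$THRESH" ]; then
    echo "WARNING: /var/log at ${USED}% -- check log forwarding (cpstat fw -f log_connection)"
fi
```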