Prime
Contributor

Check Point management server backup failed. Getting the error "No space left on device".

Logs:
[Expert@CB-DR-CP-MGMT-SRV:/opt/CPsuite-R80.40/fw1/bin/upgrade_tools]# ./migrate export 25102022.tgz
 Can opening TdError log file /opt/CPshrd-R80.40/log/migrate-2022.10.25_13.16.52.log: No space left on device
[25 Oct 13:16:52] [SetupLogging] WRN: Failed to initialize logging, continuing logging to the screen

 

CB-DR-CP-MGMT-SRV> expert
Enter expert password:


Warning! All configurations should be done through clish
You are in expert mode now.

[Expert@CB-DR-CP-MGMT-SRV:0]# df -h

Filesystem                       Size  Used  Avail  Use%  Mounted on
/dev/mapper/vg_splat-lv_current  100G   30G    71G   30%  /
/dev/sda1                        291M   43M   234M   16%  /boot
tmpfs                             31G  4.1M    31G    1%  /dev/shm
/dev/mapper/vg_splat-lv_log      4.0T  2.2T   1.8T   56%  /var/log
[Expert@CB-DR-CP-MGMT-SRV:0]# exit
exit
CB-DR-CP-MGMT-SRV> exit

6 Replies
_Val_
Admin

Please follow sk95487 and tell us if it helps.
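
sk95487 covers freeing disk space on Gaia. A minimal sketch of the usual first checks from expert mode (the du starting point below is illustrative, not from the SK); note that "No space left on device" can also mean the partition has run out of inodes rather than blocks:

# Block usage vs. inode usage -- a partition can be "full" on either.
df -h
df -i

# Largest consumers under /var/log (illustrative starting point only).
du -sh /var/log/* 2>/dev/null | sort -h | tail -20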

Timothy_Hall
Champion

 ./migrate export /var/log/25102022.tgz

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
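
The command above writes the export archive under /var/log, which the df -h output earlier shows with roughly 1.8 TB free. A hedged sketch of the same idea, checking the target partition first (the archive name is just the one used in this thread):

# Confirm the target partition has room (both blocks and inodes), then export there.
df -h /var/log
df -i /var/log
cd /opt/CPsuite-R80.40/fw1/bin/upgrade_tools
./migrate export /var/log/25102022.tgz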
PhoneBoy
Admin
Admin

A meta question: why are you using the legacy migration tools?
You should be using the newer tools, documented here: https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&solut... 
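
PhoneBoy is referring to the newer migrate_server utility that replaced the legacy migrate tool on recent releases. Purely as an illustration -- the path, flags, and target version string below are assumptions, so follow the linked SK rather than this sketch:

# Assumed location and syntax of the newer export tool -- verify against the SK above.
cd $FWDIR/scripts
./migrate_server export -v R80.40 /var/log/25102022.tgz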

Dario_Perez
Employee

Check sk166833.

Dov_Fraivert
Employee

Hi @Prime 
It seems that the log message you shared is related to a different problem.
Regarding the failed backup, can you send me /var/log/CPbackup.elg?

Thanks,
Dov

Prime
Contributor

Thanks for your suggestions, guys. This is how we troubleshot the issue:

>Per sk95487, checked with "df -i" and found inode usage at 100% on the lv_log partition.

>Killed older processes, which freed some space and inodes, but that did not resolve the issue.

>Referred to sk171686, sk36754 and sk171685, but they did not help.

>Checked multiple directories; under $FWDIR/log/blob we found several directories containing unnecessary blob files that were consuming all the inodes of the lv_log partition.

>Deleted the unnecessary blob files listed in those directories under $FWDIR/log/blob.

>Confirmed per sk118294 that the directories present in $FWDIR/log/blob can be removed.

>Removed the directories taking up the most space in $FWDIR/log/blob, while watching "df -i" in a second SSH session to the management server to monitor inode usage (a rough sketch of this follows below).

>After deleting the unnecessary blob files and bringing inode usage down, we ran a filesystem check for hard-drive integrity.

Note: inode usage on this partition should normally be in the 1-4% range.
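
A minimal sketch of the inode-hunting loop described above. The blob path comes from this thread; which directories are actually safe to delete must be confirmed against sk118294 first, so the removal line is deliberately left commented out:

# In a second SSH session: watch inode usage on the log partition.
watch -n 5 'df -i /var/log'

# Rank the directories under $FWDIR/log/blob by number of files (i.e. inodes used).
cd $FWDIR/log/blob
for d in */; do
    printf '%s\t%s\n' "$(find "$d" -type f | wc -l)" "$d"
done | sort -rn | head

# Only after confirming against sk118294 that a directory is unneeded:
# rm -rf "$FWDIR/log/blob/<directory>"

Counting files with find approximates inode consumption per directory; du measures bytes, which is why a partition can look half-empty in df -h while df -i reports 100%.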
