Blason_R
Silver

Failed to upgrade R77.30 to R80.30 on Smart-1 210 appliance

Hi Guys,

Some time back I tried upgrading a Smart-1 210 from R77.30 to R80.30. The Pre-Upgrade Verifier (PUV) showed no warnings.

However, it failed during the post-upgrade stage, and here are the logs I am seeing.

Return code = 0
Output =
[2019-11-05 - 17:57:01][3233 7021]:about to copy file /etc/fstab to /etc/fstab.orig
[2019-11-05 - 17:57:06][3233 7021]:/bin/dbset :save command summary:
Return code = 0
Output =
[2019-11-05 - 17:57:06][3233 7021]:About to execute command: /bin/bash -x /mnt/fcd/post.sh upgrade /mnt/fcd >> /var/log/install_Major_R80.30_Mgmt_T200_1_detailed.log 2>&1
[2019-11-05 - 17:57:06][3233 7021]:Command /bin/bash -x /mnt/fcd/post.sh upgrade /mnt/fcd >> /var/log/install_Major_R80.30_Mgmt_T200_1_detailed.log 2>&1 execution failed, exit code=1
[2019-11-05 - 17:57:06][3233 7021]:Failed on Major_Post_Install_Script
[2019-11-05 - 17:57:06][3233 7021]:About to execute command: /bin/mount | /bin/grep -w "/mnt/fcd" | awk '{print $1}'
[2019-11-05 - 17:57:06][3233 7021]:/bin/mount | /bin/grep -w "/mnt/fcd" | awk '{print $1}' command summary:
Return code = 0
Output = /dev/mapper/vg_splat-lv_fcd_new
[2019-11-05 - 17:57:06][3233 7021]:About to execute command: /bin/umount -l /mnt/fcd
[2019-11-05 - 17:57:06][3233 7021]:/bin/umount -l /mnt/fcd command summary:
Return code = 0
Output =
[2019-11-05 - 17:57:06][3233 7021]:About to execute command: /usr/sbin/lvremove -fvv /dev/mapper/vg_splat-lv_fcd_new
[2019-11-05 - 17:57:06][3233 7021]:Command /usr/sbin/lvremove -fvv /dev/mapper/vg_splat-lv_fcd_new execution failed, exit code=5
[2019-11-05 - 17:57:06][3233 7021]:Command output: Setting global/locking_type to 1
File-based locking selected.
Setting global/locking_dir to /var/lock/lvm
Using logical volume(s) on command line
Locking /var/lock/lvm/V_vg_splat WB
/dev/ramdisk: No label detected
/dev/sda: size is 3907029168 sectors
/dev/md0: size is 0 sectors
/dev/vg_splat/lv_fcd_GAIA: No label detected
/dev/ram: No label detected
/dev/sda1: No label detected
/dev/vg_splat/lv_fcd_NGSE: No label detected
/dev/ram2: No label detected
/dev/sda2: No label detected
/dev/vg_splat/lv_fcd_R75.47sg: No label detected
/dev/ram3: No label detected
/dev/sda3: lvm2 label detected
/dev/vg_splat/hwdiag: No label detected
/dev/ram4: No label detected
/dev/vg_splat/lv_log: No label detected
/dev/ram5: No label detected
/dev/root: No label detected
/dev/ram6: No label detected
/dev/vg_splat/lv_SNAPSHOT_8APR17: No label detected
/dev/ram7: No label detected
/dev/vg_splat/lv_B4R8030: No label detected
/dev/ram8: No label detected
/dev/vg_splat/lv_fcd_new: No label detected
/dev/ram9: No label detected
/dev/ram10: No label detected
/dev/ram11: No label detected
/dev/ram12: No label detected
/dev/ram13: No label detected
/dev/ram14: No label detected
/dev/ram15: No label detected
/dev/sda3: lvm2 label detected
/dev/sda3: lvm2 label detected
Can't remove open logical volume "lv_fcd_new"
Unlocking /var/lock/lvm/V_vg_splat

[2019-11-05 - 17:57:06][3233 7021]:About to execute command: lsof | grep $( dmsetup info -c | awk '/vg_splat-lv_fcd_new/ {printf("%d,%d\n",$2,$3)}') | awk '{print($2)}' | sort -u | while read pid; do kill -9 $pid; done
[2019-11-05 - 17:57:17][3233 7021]:About to execute command: /usr/sbin/lvremove -fvv /dev/mapper/vg_splat-lv_fcd_new
[2019-11-05 - 17:57:17][3233 7021]:Command /usr/sbin/lvremove -fvv /dev/mapper/vg_splat-lv_fcd_new execution failed, exit code=5
[2019-11-05 - 17:57:17][3233 7021]:Command output: Setting global/locking_type to 1
File-based locking selected.
Setting global/locking_dir to /var/lock/lvm
Using logical volume(s) on command line
Locking /var/lock/lvm/V_vg_splat WB
/dev/ramdisk: No label detected
/dev/sda: size is 3907029168 sectors
/dev/md0: size is 0 sectors
/dev/vg_splat/lv_fcd_GAIA: No label detected
/dev/ram: No label detected
/dev/sda1: No label detected
/dev/vg_splat/lv_fcd_NGSE: No label detected
/dev/ram2: No label detected
/dev/sda2: No label detected
/dev/vg_splat/lv_fcd_R75.47sg: No label detected
/dev/ram3: No label detected
/dev/sda3: lvm2 label detected
/dev/vg_splat/hwdiag: No label detected
/dev/ram4: No label detected
/dev/vg_splat/lv_log: No label detected
/dev/ram5: No label detected
/dev/root: No label detected
/dev/ram6: No label detected
/dev/vg_splat/lv_SNAPSHOT_8APR17: No label detected
/dev/ram7: No label detected
/dev/vg_splat/lv_B4R8030: No label detected
/dev/ram8: No label detected
/dev/vg_splat/lv_fcd_new: No label detected
/dev/ram9: No label detected
/dev/ram10: No label detected
/dev/ram11: No label detected
/dev/ram12: No label detected
/dev/ram13: No label detected
/dev/ram14: No label detected
/dev/ram15: No label detected
/dev/sda3: lvm2 label detected
/dev/sda3: lvm2 label detected
Can't remove open logical volume "lv_fcd_new"
Unlocking /var/lock/lvm/V_vg_splat

[2019-11-05 - 17:57:17][3233 7021]:volume cleanup failed.
[2019-11-05 - 17:57:17][3233 7021]:remaining open files:
[2019-11-05 - 17:57:17][3233 7021]:failed to remove partition
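For context, the key line above is `Can't remove open logical volume "lv_fcd_new"`: some process still had the new LV open, so `lvremove` kept failing with exit code 5. A minimal sketch of what the upgrade script's cleanup is doing, i.e. mapping the LV name to its device-mapper major/minor pair and then hunting for holders (the sample row and the `253,9` pair below are fabricated for illustration, not taken from this appliance):

```shell
# `dmsetup info -c` prints one row per device-mapper device; columns 2 and 3
# are the major and minor numbers. The sample row here is illustrative only.
sample='vg_splat-lv_fcd_new  253   9 L--w  1  1  0'
majmin=$(echo "$sample" | awk '/vg_splat-lv_fcd_new/ {printf("%d,%d", $2, $3)}')
echo "$majmin"   # → 253,9

# On the appliance itself one would then look for holders and retry the cleanup:
#   lsof | grep "$majmin"                        # PIDs keeping the LV open
#   umount -l /mnt/fcd                           # lazy-unmount if still mounted
#   lvremove -f /dev/mapper/vg_splat-lv_fcd_new  # retry once nothing holds it
```

This is essentially the same `dmsetup`/`lsof`/`kill` pipeline the installer ran at 17:57:06 above; the fact that the retry at 17:57:17 still failed suggests the holder either respawned or survived the `kill -9`.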

1 Solution

Accepted Solutions
Employee+

Re: Failed to upgrade R77.30 to R80.30 on Smart-1 210 appliance


Hi

After checking the file: /var/log/install_Major_R80.30_Mgmt_T200_1_detailed.log

I found the line below, which is the root cause of the failure:

+ echo 'Admin user is disabled on the machine. Exiting'

In order to complete the upgrade, please enable the admin user (it can be disabled again later).
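As an aside, on a standard Linux shell one quick way to see whether an account is locked is to inspect its password field in `/etc/shadow`: a leading `!` or `*` in the second field marks the account as locked/disabled. The shadow entry below is a fabricated sample, not from a real appliance:

```shell
# Fabricated /etc/shadow entry; a '!' or '*' at the start of field 2 marks
# the account as locked/disabled.
shadow_line='admin:!$1$xyz$abcdefghijklmnop:18000:0:99999:7:::'
echo "$shadow_line" | awk -F: '$2 ~ /^[!*]/ {print $1 " is locked"; next} {print $1 " is enabled"}'
# → admin is locked

# On the box itself, the equivalent check would be:
#   grep '^admin:' /etc/shadow
```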

I'm working on removing this limitation and, first of all, on providing better information about the failure reason.

Sorry for the inconvenience.

Boaz

6 Replies
Admin

Re: Failed to upgrade R77.30 to R80.30 on Smart-1 210 appliance

Looks like it failed to make a snapshot, which is required to do a CPUSE upgrade.
How much disk space do you have available for snapshots?
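One way to answer this on a Gaia box is to look at the volume group's free space with the standard LVM reporting tools. The figures in the sample output below are illustrative, not from this appliance:

```shell
# Sample output of:
#   vgs --units g -o vg_name,vg_size,vg_free --noheadings
# (real values will differ per appliance):
sample='  vg_splat  465.00g  41.00g'
echo "$sample" | awk '{print "Free in " $1 ": " $3}'
# → Free in vg_splat: 41.00g
```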
Blason_R
Silver

Re: Failed to upgrade R77.30 to R80.30 on Smart-1 210 appliance


Hi

Around 41 GB is available for snapshots; I'd guess this should be enough?

Employee++

Re: Failed to upgrade R77.30 to R80.30 on Smart-1 210 appliance


Hi

Can you please send the file /var/log/install_Major_R80.30_Mgmt_T200_1_detailed.log to tfridman@checkpoint.com?

Thanks

Tal

Blason_R
Silver

Re: Failed to upgrade R77.30 to R80.30 on Smart-1 210 appliance


Hello,

Thanks for the reply; I have shared the file in a separate mail.

Employee+

Re: Failed to upgrade R77.30 to R80.30 on Smart-1 210 appliance


Hi

I would appreciate it if you could run "da cli collect_logs" and send me the resulting tgz (boazo@checkpoint.com).

Since the failure occurred in the post actions, the new partition was already created, so I doubt it's a disk space issue.

However, from the logs you shared it is difficult to determine the first failure point (later the lvremove fails, along with some other issues).

R&D Deployment Team Leader

Boaz
