Trying to upgrade some 15600s tonight from R80.10 JHF T249 to R80.30 JHF T226 using the Blink package.
The upgrade runs all the way up to 100% and then fails with this error:
Result: Upgrade of package Blink_image_1.1_Check_Point_R80.30_T200_JHF_T226_SecurityGateway.tgz Failed
Failed to perform post install actions.
CPUSE cannot extract the installation files. Make sure there is enough free disk space and try again. The package may be corrupted. Try to re-download the package.
This has happened on both members of the cluster. Verifier passed on both gateways after the download.
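For anyone trying to reproduce this, the equivalent CPUSE clish sequence is roughly the following (treat it as a sketch; the package name is whatever shows up in your own download list, and tab completion fills it in):

installer download Blink_image_1.1_Check_Point_R80.30_T200_JHF_T226_SecurityGateway.tgz
installer verify Blink_image_1.1_Check_Point_R80.30_T200_JHF_T226_SecurityGateway.tgz
installer upgrade Blink_image_1.1_Check_Point_R80.30_T200_JHF_T226_SecurityGateway.tgz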
Checking disk space in lvm_manager confirms there is plenty of free and unpartitioned space on the gateway, to the tune of about 600 GB.
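If anyone wants to double-check the same thing, both of these are run from expert mode (lvm_manager is the interactive menu tool, so the exact menu text varies a bit by Gaia version; df -h just shows free space on the mounted partitions):

df -h
lvm_manager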
So, I tried downloading and installing the normal non-Blink image for the 80.30 upgrade. It too failed with the following:
Result: Upgrade of package Check_Point_R80.30_T200_Fresh_Install_and_Upgrade_Security_Gateway.tgz Failed
Cannot detach the partition. Make sure no process is active on the partition.
Contact Check Point Technical Services for further assistance.
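Before going for a reboot I would have liked to see what was actually holding the partition. If anyone else hits the detach error first, something along these lines from expert mode should at least name the offending process (the mount point is a placeholder; use whichever partition CPUSE is complaining about):

mount
fuser -vm <mount_point>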
When I talked to Diamond TAC, they found a previous case where a reboot fixed the issue.
Sure enough, after the reboot, the first gateway to get upgraded finally got through the 80.30 upgrade. I was then able to install JHF T226.
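So the order that finally worked for me, in case it saves anyone a call (the JHF package name is a placeholder, take whatever CPUSE lists for the R80.30 JHF T226 bundle):

reboot
installer upgrade Check_Point_R80.30_T200_Fresh_Install_and_Upgrade_Security_Gateway.tgz
installer install <R80.30_JHF_T226_package>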
Interesting behavior here too. No status updates or anything. Just:
Info: Initiating upgrade of Check_Point_R80.30_T200_Fresh_Install_and_Upgrade_Security_Gateway.tgz...
Interactive mode is enabled. Press CTRL + C to exit (this will not stop the operation)
Result: Did not find any new packages
Within about three minutes the gateway just reboots its merry way into R80.30.
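If you want at least some visibility while it's doing that, running the status command from a second clish session shows which stage the deployment agent thinks it is in (nothing fancier than this as far as I know):

show installer status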
Anyone seen this behavior?
Also saw some interesting display behavior in CPUSE (v1999): after the Blink image failed, doing an installer download no longer showed the R80.30 + JHF T226 as an available download, just R80.30 Security Gateway for appliances and Open server.
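If anyone wants to compare, the agent build (the v1999 above should be the Deployment Agent build) and the full list of offered packages can be pulled from clish with:

show installer status build
show installer packages available-for-download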
On another note, someone needs to update the upgrade guide to note that when you push the <Name_of_VSX_Cluster_object>_VSX policy, the MDS will push all VSX instance policies regardless of which CMA they're in.
The document leaves that note out, and it is super scary not knowing whether all of the policies actually got pushed.
Thankfully TAC introduced me to vsx stat -v, which confirmed the policy had indeed been installed, but it would be nice if the document noted that. Fired up a connectivity upgrade and everything failed over nicely.
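For anyone else who needs the same reassurance mid-upgrade, both of these are run from expert mode on the VSX gateway: vsx stat -v lists each Virtual System with its policy name and install time, and cphaprob stat is the usual way to watch the cluster member states during the connectivity upgrade:

vsx stat -v
cphaprob stat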