kamaladmire1
Participant

CloudGuard Network Security Upgrade

Hi CheckMates,

I am looking to upgrade CloudGuard R80.40 to R81.20.

I would appreciate some help with this. From the current R80.40 deployment we can upgrade to R81.10, but not to R81.20: the pre-upgrade verification checks fail with the error below.

Error:

[Screenshot: kamaladmire1_2-1726745053582.png]

 

Gateway disk:

parted -l

[Screenshot: kamaladmire1_5-1726745273821.png]

fdisk -l

[Screenshot: kamaladmire1_6-1726745273845.png]
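
As a rough sketch of what the screenshots above show, these are the expert-mode commands used to inspect the partition layout; the lsblk call and the interpretation notes are my additions, not from the original post:

# Run from Gaia expert mode on the gateway.
parted -l    # check the "File system" column: this R80.40 deployment reports ntfs instead of linux-swap
fdisk -l     # cross-check the partition table as reported by the kernel
lsblk -f     # optional, if available: file system type per block device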

Image used for the upgrade:

aio_Check_Point_ivory_main_T634__R81.20_Gaia_3_10_Install_and_Upgrade.tar

 

The current R80.40 deployment has an NTFS file system on its disk!

 

Is there a way to upgrade directly to R81.20 without a parallel deployment?
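
For anyone reproducing the failure, a rough sketch of running the CPUSE pre-upgrade verifier against the imported package from expert mode; the package path and number are placeholders, and the exact Clish syntax can vary slightly between Gaia versions:

# Import the R81.20 package and run the CPUSE verifier (path and package number are examples).
clish -c "installer import local /var/log/aio_Check_Point_ivory_main_T634__R81.20_Gaia_3_10_Install_and_Upgrade.tar"
clish -c "show installer packages"    # note the number assigned to the imported package
clish -c "installer verify 1"         # replace 1 with the actual package number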

 

AkosBakos
Advisor

Hi @kamaladmire1,

A CloudGuard gateway upgrade is completely different from a simple cluster upgrade.

Here is the guide:

https://sc1.checkpoint.com/documents/IaaS/WebAdminGuides/EN/CP_CloudGuard_Network_for_Azure_HA_Clust...

Read it carefully, and if you are not familiar with Azure, ask for help.
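
If it helps, the available CloudGuard images can be listed with the Azure CLI before redeploying; the publisher name below is my assumption, so verify it against the Marketplace listing:

# List Check Point images in the Azure Marketplace (publisher name assumed to be "checkpoint").
az vm image list --publisher checkpoint --all --output table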

Akos

----------------
\m/_(>_<)_\m/
kamaladmire1
Participant

Thanks Akos, I have already looked at what you are referring to. However, in this particular scenario the issue is the disk partition being NTFS. If I were on R81.10 with a Linux-swap partition, I could upload the R81.20 image and do an in-place upgrade (see the screenshot below from R81.10).

[Screenshot: kamaladmire1_1-1726746516752.png]

 

 

AkosBakos
Advisor

Hi @kamaladmire1 

I wouldn't bother with partitioning.

I would install a new R81.20 gateway and redirect the traffic from R81.10 to R81.20 in a maintenance window.
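
As a sketch of what that redirect could look like with the Azure CLI (all resource names are placeholders, and your environment may steer traffic through a load-balancer backend pool instead of a UDR):

# Point the default route of a spoke route table at the new R81.20 gateway's internal IP (example names).
az network route-table route update \
  --resource-group my-rg \
  --route-table-name spoke-rt \
  --name default-route \
  --next-hop-ip-address 10.0.1.10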

Or did I misunderstand something?

Akos

----------------
\m/_(>_<)_\m/
kamaladmire1
Participant

Well, that's the last option. I would prefer to do an in-place upgrade if possible; that is what I am looking for here. Is anyone else having this issue when the disk is NTFS?

AkosBakos
Advisor

Hi @kamaladmire1 

Because you will deploy the new R81.20 gateway from Azure, it should work without any problem. From that point the partitioning will be OK, and you will have a new gateway.

I did these steps last time, and it worked for me.

Akos

----------------
\m/_(>_<)_\m/
AkosBakos
Advisor

Hi,

The second step is this:

Deploy a new Check Point CloudGuard Network High Availability in the needed version.
You need to deploy it to the same subnets as in the existing CloudGuard Network High Availability solution.

In this case, you will deploy an R81.20 gateway.
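
A quick way to confirm those subnets before deploying the new solution (names below are placeholders):

# List the subnets of the VNET hosting the existing CloudGuard HA deployment (example names).
az network vnet subnet list --resource-group my-rg --vnet-name cloudguard-vnet --output table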

----------------
\m/_(>_<)_\m/
Duane_Toler
Advisor

Yes, what @PhoneBoy says is correct; CloudGuard for R80.40 has an incorrect partition layout for Azure. You cannot do an in-place upgrade from R80.40 to anything higher. I manage numerous CloudGuard gateways for customers and have tried to work around this, but it fails. You must deploy new gateways from the Marketplace into a new resource group. If you also have a frontend load-balancer configuration in use, you will need to migrate all of those resources as well, which means you will also have to recreate your VNET peerings to the new VNET.
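
For the peering part, a hedged sketch of recreating one peering toward the new VNET with the Azure CLI; all names are placeholders, and the matching peering must also be created from the remote side:

# Recreate the hub-to-CloudGuard peering against the new VNET (example names and IDs).
az network vnet peering create \
  --resource-group hub-rg \
  --vnet-name hub-vnet \
  --name hub-to-new-cloudguard \
  --remote-vnet /subscriptions/<subscription-id>/resourceGroups/new-cg-rg/providers/Microsoft.Network/virtualNetworks/new-cg-vnet \
  --allow-vnet-access \
  --allow-forwarded-traffic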

NOTE:  If you have a Basic SKU load-balancer deployed now, for a single gateway, then you need to deploy a new load-balancer in the new resource group.  The single gateway template does not deploy a load-balancer; the HA template does.  However, be careful:  You will want to deploy a Standard SKU load-balancer because Azure is ending support for Basic SKU objects in September 2025.  If you have Basic SKU objects now, then you need to upgrade the Basic SKU IP addresses to Standard SKU IP addresses after you move them and before you can attach them to a Standard SKU load-balancer.

If you have a Standard SKU load-balancer (for a cluster), then you are ok.
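
To check which SKU you have, and as a rough sketch of the Basic-to-Standard public IP upgrade (the IP must first be disassociated from its resource; all names are placeholders and the exact CLI behavior should be verified against current Azure documentation):

# Show the SKU of the existing load balancer (example names).
az network lb show --resource-group my-rg --name cg-frontend-lb --query sku.name --output tsv

# Upgrade a disassociated Basic public IP to Standard (Standard IPs are always static).
az network public-ip update --resource-group my-rg --name cg-frontend-ip --sku Standard --allocation-method Static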

 

 

PhoneBoy
Admin

One thing I know changed in R81.20 is the disk partitioning.
These changes can only be implemented through a clean install.
There may be other underlying changes that necessitate this approach as well.

