aner_sagi
Contributor

SmartCenter Gaia on Nutanix?

Hi All,

A new customer of mine wants to move his R80.10 SmartCenter (currently on Hyper-V) to Nutanix.

Is it supported?

Thanks in advance

Aner

50 Replies
Herold
Contributor
Thanks for the quick feedback. I have a customer that wants to run an MDS HA pair, one member on Nutanix and one on VMware. Are there any timelines that can be communicated?
Amir_Arama
Advisor

Hi Dima,

I want to understand this better: if I install the qcow2 file on Nutanix, will I get a regular Security Management / SmartEvent VM (for example), or is it limited in some way or different from a regular VM? I just don't understand why the management VM for Nutanix is called CloudGuard. Is it a completely normal, ordinary management server? I would love to know the details. Thanks.

PhoneBoy
Admin

In short, yes. 

Our cloud offerings collectively bear the CloudGuard brand. 
This includes virtualized versions of our Security Gateway (otherwise known as CloudGuard Network Security) and Management running on the various hypervisors.

Amir_Arama
Advisor

Thank you.

One more question, please.

I installed the R80.40 management qcow2 image for Nutanix, and it came with a 100 GB disk. We tried resizing the disk to 1.8 TB on the Nutanix virtual platform, then I rebooted the machine and ran lvm_manager, and I saw only 6 GB of free space, as if it didn't recognize the extra space at all. Do you have any idea whether it's possible to increase the disk size, or is it hardened in the virtual appliance image? Especially on management/log/SmartEvent servers we need extra space for the logs.

PhoneBoy
Admin

Not sure how resizing works with KVM images.

That said, the qcow2 images are primarily for gateways.
You should be able to install the regular ISO in a KVM VM with a disk of whatever size you want.
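
For reference, a minimal sketch of that ISO-based approach on a plain libvirt/KVM host (not Nutanix AHV); the VM name, sizes, paths and --os-variant value below are placeholders, so adjust them to your environment:

# create an empty qcow2 disk of whatever size the deployment needs
qemu-img create -f qcow2 /var/lib/libvirt/images/ckp-mgmt.qcow2 500G

# boot the Gaia ISO against that disk and run the installer from the console;
# if the installer does not detect the virtio disk, switch the bus to sata/ide
virt-install --name ckp-mgmt --memory 16384 --vcpus 4 \
  --disk path=/var/lib/libvirt/images/ckp-mgmt.qcow2,bus=virtio \
  --cdrom /isos/Check_Point_R80.40_Gaia.iso \
  --os-variant rhel7.0 --graphics vnc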

Garrett_DirSec
Advisor

Hello @PhoneBoy  -- thanks for the post and update. 

Is there an official position on deployment of Gaia on Nutanix for both (a) SmartCenter and (b) Gateway?

I know both are officially supported with R80.40+, but the question of whether to use the qcow2 image or build SmartCenter from scratch still seems vague.

 

PhoneBoy
Admin

Both should be supported (from R80.40) per the HCL: https://www.checkpoint.com/support-services/hcl/#os 
The qcow2 images we provide are effectively pre-installed images for gateways, without the First Time Wizard having been run.
That makes them much more suitable for automated deployments.
If you install from ISO, you can customize the partitioning, initial admin user, etc., which is definitely more appropriate for management. 

Amir_Arama
Advisor

Hi

As someone on this thread wrote, the regular ISO with the disk configured as SCSI bus type shows an error that no hard drive was found and asks me to choose drivers. Only with SATA does it work fine, but that degrades performance and is not recommended by Nutanix. There is an SK about this message in relation to VMware that gives the solution of changing to SAS, but there is no such option in Nutanix. Can you specify what exactly the problem is and how to solve it? Thanks.

Garrett_DirSec
Advisor

Hello @PhoneBoy -- per @DS9ish's post, the pre-built qcow2 images for Nutanix include the SCSI driver; the ISO does not.

Unsure if this is on purpose or an oversight?

If Nutanix recommends a SCSI device for the guest build, the only path forward for R80.40 on Nutanix is the qcow2 images.

PhoneBoy
Admin

Possible it was an oversight.
My guess is the TAC can get an ISO with the correct drivers on it, but sk94671 might be a better approach.

RamGuy239
Advisor

Is this the case, though? I had a rough time with a Nutanix AHV deployment for a customer; this was R80.40 just after it was released. We had a ton of issues. The management server was okay, but their cluster kept crashing and boot-looping.

It took a great deal of time and pain until R&D told us about the CloudGuard for Private Cloud images, and we were told to always opt for the image over the ISO for virtual deployments. Period. They contain various tweaks you won't get with the ISO unless you apply the same tweaks yourself.

Some noticeable differences between our R80.40 deployment on Nutanix AHV when using ISO vs image:

ISO: Runs in user-space mode (USFW) by default.
Image: Runs in kernel mode (KMFW) by default.

ISO: Does not contain drivers for SCSI disks on Nutanix AHV.
Image: Contains drivers for SCSI disks on Nutanix AHV.

ISO: Boot delay uses the default value of 0, making it next to impossible to interrupt boot on virtual installations when you need to enter maintenance mode.
Image: Boot delay is pre-configured to 5 seconds, making it easy to interrupt boot via the console if you need to enter maintenance mode.

ISO: Did not contain any hotfixes; it was blank/vanilla R80.40.
Image: Contained one hotfix. I never got any clarification on what exactly it includes.


Ever since, I have never used the ISO for virtual deployments. I use the OVF images from sk158292 for deploying management and gateway installations on VMware ESXi, and the qcow2 images for deploying on Nutanix AHV and KVM.

And I can't see any reason why one would prefer the ISO. OVF for VMware ESXi is vastly superior, as there is a step during the deployment process where you can pre-fill information such as the admin password, SIC password, etc., making the deployment even faster and more seamless. And just like the image for Nutanix/KVM, the OVF files contain various tweaks out of the box, like the added 5-second boot delay, which is extremely handy so you won't have to modify grub.conf manually yourself.
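
For anyone stuck with an ISO-based install, the manual tweak is roughly the following (a sketch only: it assumes your Gaia build still uses a legacy-style /boot/grub/grub.conf with a timeout directive, so verify the file and syntax on your own system before editing):

# in expert mode: back up the bootloader config, then raise the boot menu timeout
cp /boot/grub/grub.conf /boot/grub/grub.conf.bak
sed -i 's/^timeout.*/timeout=5/' /boot/grub/grub.conf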


It also makes you far less likely to select the wrong values on the host, as these things come pre-configured. When using the OVF on VMware ESXi you don't have to think about which storage controller, operating system version, NIC controller, etc. to choose, as it's already contained within the image. It also makes sure the disk is thick provisioned by default, which is the recommended approach compared to thin, which most people end up choosing in most scenarios.

The only downside is that if the default disk size of 100 GB isn't enough, you will have to add an additional disk and follow sk94671 in order to resize lv_current and lv_log.

Other than that I can't see any downsides at all. I don't really understand why these images are not being promoted. The SK that lists them isn't even shown on the product pages for R81 and R81.10, even though the images do exist. They are very well hidden.


The user-space vs. kernel mode issue on the gateways was the root cause of all our clustering problems when deploying gateways on Nutanix. If we had known about the images beforehand, we wouldn't have spent weeks with a non-working cluster: using the image would have already enabled kernel mode, which according to R&D at the time was required for clustering to work on Nutanix, and which is why the image comes with this adjustment out of the box.

Now I simply take it for granted that the ISO is context-free, which it pretty much is. Opting for images that come with pre-defined tweaks and optimisations in place seems like the obvious choice if you ask me.

Certifications: CCSA, CCSE, CCSM, CCSM ELITE, CCTA, CCTE, CCVS, CCME
RamGuy239
Advisor

We ended up re-deploying the management server using the qcow2 image as well. The management server didn't show any issues, but disk performance seemed to be lacking. Using the image with SCSI, instead of the ISO which was limited to IDE, made disk performance noticeably better.

Since then we have moved to R81. With all our prior knowledge from the R80.40 deployment, we simply went straight onto the qcow2 images for both management and gateways and didn't have any issues with the R81 deployment at all.

Certifications: CCSA, CCSE, CCSM, CCSM ELITE, CCTA, CCTE, CCVS, CCME
DS9ish
Participant

Read sk94671.  To add space to virtualized Gaia machines, the general path is to add a brand new disk to the guest, prep it for Linux LVM from within Gaia, add it to the volume group, then use lvm_manager to expand the logical volumes where you want more space.
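
In shell terms that flow looks roughly like this once the new virtual disk is attached, assuming it shows up as /dev/sdb and that the volume group is Gaia's usual vg_splat (confirm both with fdisk -l and vgs, and treat sk94671 as the authoritative procedure):

fdisk -l                      # confirm the device name of the new disk
pvcreate /dev/sdb             # initialize it as an LVM physical volume
vgextend vg_splat /dev/sdb    # add it to the existing volume group
lvm_manager                   # then grow lv_current / lv_log interactively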

Garrett_DirSec
Advisor

Great reference to sk94671, thanks!

RamGuy239
Advisor

You should never resize the disk for Check Point installations. All the virtual images (KVM, ESXi and Hyper-V) come with a 100 GB disk by default. When adding additional disk space, this must be done by adding additional virtual disks, not by resizing the existing one.

Add one additional disk with the appropriate amount of disk space and follow sk94671 to make it available to lvm_manager.
https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&solut...
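
On Nutanix AHV, the hypervisor-side part of that can be done from Prism, or roughly like this from a CVM shell (the VM name, size and container below are placeholders):

# attach a second SCSI disk to the management VM
acli vm.disk_create ckp-mgmt create_size=900G container=default bus=scsi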

Certifications: CCSA, CCSE, CCSM, CCSM ELITE, CCTA, CCTE, CCVS, CCME
Amir_Arama
Advisor

Thanks.

One more question, please.

Did you succeed in installing Nutanix NGT (guest tools) on the Check Point management qcow2 image?

If so, I would like to know the procedure.

Thanks.

Amir_Arama
Advisor

Hi,

Is it critical? I already did it, and the machine has been up and running like that for a month. Is there a good reason I should recreate the VM with two disks as you said?

RamGuy239
Advisor

This is just branding. All virtual deployments, whether local or cloud, VMware, Hyper-V or KVM, are branded as "CloudGuard". As you can see from sk158292, all the images are branded as "CloudGuard for Private Cloud images".

Private Cloud is simply a fancy name for a local virtual environment, no matter whether you are deploying a management installation or a gateway. If you are running it virtually and it's a Check Point installation, the current naming means it's "CloudGuard for Private Cloud".

I find this rather confusing, but this is how it is, and these images are the recommended approach for deploying virtual installations. As many have pointed out, for Nutanix installations the only way to get SCSI support is to use the qcow2 image for KVM/Nutanix. If you do the installation via ISO, you won't be able to install using SCSI.


I don't know why Check Point makes these images so hard to find. There are images for R81, but there is no link to them on the R81 product page. The same now goes for R81.10: there are images for R81.10 (only qcow2 for the time being), and still no links to them on the R81.10 product pages.

R80.40 (sk160736) contains this link:
CloudGuard: see sk158292

R81 (sk166715) and R81.10 (sk170416), for whatever reason, do not. It makes things rather confusing, and it's no wonder most people end up using the ISO rather than the images for virtual deployments, even though the images are the superior way to deploy in pretty much every respect.

 

Certifications: CCSA, CCSE, CCSM, CCSM ELITE, CCTA, CCTE, CCVS, CCME
Amir_Arama
Advisor

Hi

Did you find any solution for installing management on Nutanix with a SCSI disk? Nutanix recommends using SCSI for maximum performance, and that's important for a management/log server.

DS9ish
Participant

@Amir_Arama I was able to import the R80.40 OpenStack / Nutanix AHV / KVM qcow2 image from sk158292 into our Nutanix image service, then clone the disk as a SCSI disk during creation of the management guest VM. This created a system that booted and was ready to run the First Time Wizard. From there I was able to install it as a secondary management server and sync to it from our primary. At this point I am still actively testing this build, so I don't have further results to share (positive or negative).
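
For anyone wanting to reproduce this, the same flow can also be driven from a CVM shell roughly as follows (the names, source URL and network below are placeholders, and the image import and VM creation can equally be done through Prism):

# import the qcow2 into the Nutanix image service as a disk image
acli image.create ckp-r8040-mgmt source_url=nfs://fileserver/images/R80.40_mgmt.qcow2 container=default image_type=kDiskImage

# create the management VM, clone the imported image onto a SCSI disk, then power it on
acli vm.create ckp-mgmt memory=16G num_vcpus=4
acli vm.disk_create ckp-mgmt clone_from_image=ckp-r8040-mgmt bus=scsi
acli vm.nic_create ckp-mgmt network=mgmt-vlan
acli vm.on ckp-mgmt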

Note that this only gives you the single 100 GB disk. As I indicated in my previous post, I was able to add additional storage capacity by following the general process outlined in sk94671: creating a new disk, adding it to the VM definition, then going through the LVM process within Gaia to add the new disk's capacity at the OS level.

Amir_Arama
Advisor

Did you encounter the issue I'm having? When I run ethtool eth0 it shows only "Link detected: yes" without the other lines, and show interface eth0 shows the link up but speed N/A and duplex N/A, even though it has ping connectivity. I wonder whether this is by design or an error. Also, after you first start the VM with the disk cloned from the image, should it ask to run the First Time Wizard, or should it already be all set? Thanks.

