raindrop_true
Explorer

Building HomeLab

Hi,

I want to build my own lab with physical Check Point appliances (instead of VMs in GNS3). I have found a few cheap models, like the 4400 (T-140) and the Smart-1 205. They should support version R80.40; my question is whether I can install R81 on them.

Also, is there anyone here who keeps a few chassis at home and could share thoughts/experience about building a home lab with Check Point appliances? I would appreciate it.

Thanks

22 Replies
_Val_
Admin

Personally, I would go with VMware of some kind. The 4000 series does not support R81, and neither does the Smart-1 205.

Full information about appliance support lines and supported versions is here: https://www.checkpoint.com/support-services/support-life-cycle-policy/#appliances-support

 

G_W_Albrecht
Legend

Yep, I would also suggest a VM. Purpose is what counts here...

CCSE CCTE CCSM SMB Specialist
the_rock
Legend

Keep in mind that R81 has noticeably higher resource requirements across the board. For example, just one thing R&D told me in an email thread I had with them: to have Infinity Threat Prevention work properly, you need AT LEAST 2 GB of free memory. Personally, that's not something most customers could manage unless they purchase a very expensive appliance. And that is just one example...
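
If you want to sanity-check a box against that number, Gaia is Linux underneath, so something like this works (a quick sketch; the 2048 MB threshold is just the figure quoted above):

# Print free memory in MB; Infinity Threat Prevention reportedly wants
# at least 2048 MB free.
free -m | awk '/^Mem:/ { print $4 " MB free" }'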

Bob_Zimmerman
Authority

If you don't need your lab to be running all the time, I would really recommend just a decent VM host. One of my older VM hosts is a NUC6i3 with 32 GB of RAM. That model (and all subsequent models with Core i processors) can actually use 64 GB of RAM, though that had not been confirmed when I built mine. NUC6i3SYH is $155 or less including shipping on eBay, two 32 GB DDR4 SO-DIMMs from Crucial are $300 total on Amazon, a 1 TB Samsung 870 EVO is $115 on Amazon. ESXi and Hyper-V Server are both free and both highly scriptable. That's $570 for two cores plus hyperthreading, 64 GB of RAM, and 1 TB of storage. Enough to run at least five big lab VMs. With how I run mine, I could probably do a management, six firewalls, and about 20 OpenBSD routers/switches/endpoints before it felt constrained.

Regarding scriptability, I don't have an MDS license, so I used VirtualBox to build a VM, gave it predictable SSH and TLS keys, then took a snapshot before its first boot. Every time I restore to that snapshot, it gets a new 15-day eval license. I can then use a script to clone the VM with vboxmanage, run config_system, copy over the new CPUSE and jumbo, install them, and so on. Takes ~20 minutes to run, but it gives me a very predictable environment. I add management config with mgmt_cli commands at the end of my build script.
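
A rough sketch of that flow (not the exact script; VM names, snapshot name, IPs, password, and the answer-file path are all placeholders):

# Restore the pre-first-boot snapshot so the next boot gets a fresh
# 15-day eval, then clone and configure.
BASE="GaiaR81-base"             # VM with predictable SSH and TLS keys
SNAP="pre-first-boot"
CLONE="lab-mgmt-01"
CLONE_IP="203.0.113.10"
PW="placeholder-password"

vboxmanage snapshot "$BASE" restore "$SNAP"
vboxmanage clonevm "$BASE" --name "$CLONE" --register
vboxmanage startvm "$CLONE" --type headless

# First-time setup from an answer file once SSH is reachable; predictable
# keys mean no host-key prompts in the script.
ssh admin@"$CLONE_IP" "config_system -f /home/admin/ftw.conf"

# Management config via the API at the end.
mgmt_cli login user admin password "$PW" -m "$CLONE_IP" > id.txt
mgmt_cli add host name web-01 ip-address 10.0.0.10 -s id.txt
mgmt_cli publish -s id.txt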

Official images for Vagrant which get a new 15-day eval license when deployed would sure be nice. Hint, hint.

 

If you're sure you want physical boxes, but don't need official call-the-TAC support, you can get away with pushing past official support limits. As a specific example, I got a 2200, swapped in a 400 GB SSD, and bumped the RAM to 8 GB to give me a management server (really a standalone, which I built using a tweaked config_system) as a development target. It's running R80.40 right now. I don't need to call the TAC, I just need the license to not expire. Make sure the box can license itself before you buy it. Lots of resold boxes have been "traded in" by their former owners, so their licenses are no longer valid in the User Center.

JanVC
Collaborator

Bob_Zimmerman
Authority

Those get new SSH and TLS keys every time you deploy from them, don't they? That's fine for production, but annoying for development use. With a proper Vagrant image, it wouldn't matter too much what keys it had originally, as the Vagrant build script could overwrite them for you.

the_rock
Legend

I agree with all the responses here. Take the advice these guys gave; they are some of the best in the Check Point community!

PhoneBoy
Admin

There’s very little to be learned/gained from using older physical Check Point appliances for labs.
You're far better off installing VMware or similar on an older server or NUC and building whatever you need that way.

raindrop_true
Explorer

Having a piece of Check Point hardware next to me would probably be more entertaining, but indeed there is little to no gain from that kind of approach.
Thank you all for your replies; they have helped me clarify this topic.
I'm going to shift my focus to VMware.

Thank you

Bob_Zimmerman
Authority

Thought I would expand on this a little more. Still collecting data, and posting what I have over on CPUG.

I did some performance testing of management API calls. My early results show a big performance difference between older Check Point branded servers and VMs. My 2200 takes an average of 14.9846 seconds to show 500 full application/site objects, and 4.9269 seconds to show just one application/site's UUID. Meanwhile, a VM (two cores, 8 GB RAM) on my personal desktop (two Xeon X5675 processors, 96 GB of RAM, 2 TB consumer SSD) takes 3.96025 seconds on average to show 500 full objects.

Yes, the VM can show 500 full objects faster than the older device can show one single UUID. The VM has a little under twice as much processor time per real second, and the difference is much bigger than could be explained by that alone. I think part of it is that the Xeon cores have SSE4, while the Atom cores do not (so software which wants to use an SSE4 instruction would have to fall back on software emulation of the instruction), but I probably don't care enough for instruction-level analysis.
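
The timing harness itself is nothing fancy; something along these lines (a sketch, assuming GNU date and a session file created with mgmt_cli login beforehand):

# Time 1000 runs of the 500-object query and print the mean.
runs=1000
total=0
for i in $(seq 1 "$runs"); do
    start=$(date +%s%N)
    mgmt_cli show application-sites limit 500 details-level full -s id.txt > /dev/null
    end=$(date +%s%N)
    total=$(( total + end - start ))
done
awk -v t="$total" -v r="$runs" 'BEGIN { printf "mean: %.4f s over %d runs\n", t / r / 1e9, r }'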

Timothy_Hall
Champion

Interesting findings.  The API server process and the cpm process are both multi-threaded with Java and therefore should be able to spread load across multiple cores.  Some of the delta is almost certainly the CPU difference:

2200: Intel Atom D525 1.80GHz Dual-Core; Passmark Score: 423

Xeon X5675 3.06Ghz; Passmark Score: 1574

So that's roughly a 4x faster CPU, which is almost exactly in line with the 2200 taking 14.9846 seconds for 500 objects while your Xeon takes only 3.96025 seconds. Passmark is certainly not perfect, but it is a reasonable approximation of relative CPU power. One other factor that can significantly affect API performance is the heap size, which is allocated based on the core and memory resources of the box in question. This is viewable with the api status command:

Profile:
------------
Machine profile: Large SMC env resources profile without SME
CPM heap size: 1024m
API heap size: 256m

I would be curious to see the heap sizes reported by api status on both systems. If the heap size of API and/or CPM is lower on the 2200, the CPU on that system may spend a fair amount of time performing Java garbage collection (thread name GC Slave) to free up heap space instead of actually processing your request.

Some of the difference could be disk speed as well, but that should be somewhat minimized by running the same query multiple times; assuming the data can be cached and will fit in the 2200's RAM the disk I/O delay of subsequent API calls should be minimal.
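
One quick way to check for that on the 2200 (a sketch; process and thread names can vary by version) is to list the CPM Java process's threads and see how much CPU time the GC threads have accumulated:

# Find the cpm process, then show per-thread CPU usage; lots of time in
# "GC Slave" threads suggests heap pressure.
CPM_PID=$(pgrep -f cpm | head -1)
ps -L -p "$CPM_PID" -o tid,pcpu,time,comm | grep -i gc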

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
Bob_Zimmerman
Authority

I’ve been tracking the information from api status as I cut resources off of the VM, and I’ll be sure to share when I have the data set I wanted to collect.

I/O shouldn’t be the limit, as the S3700 I have in my 2200 performs far better than the consumer drive in my Mac Pro. The disk image might wind up cached in the host's unused RAM, though. I'm running each call 1000 times.

The Passmark scores are per-core, right? Overnight, I removed more hardware from the VM and ran the API call stats with one core, 8 GB of RAM. Now I’m trying one core, 4 GB. It’s *still* performing better than the 2200 so far. The 2200's processor supports hyperthreading, but it's disabled. The X5675 also supports it, but the VM isn't aware of it; for scheduling purposes, the VM sees only single-threaded processor cores. I may have to limit it to fractional time on one core next.
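
(In VirtualBox, fractional time on one core is a single flag; the VM name here is a placeholder.)

# Cap the VM at one core and half of that core's time.
vboxmanage modifyvm "GaiaR81-perf" --cpus 1 --cpuexecutioncap 50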

Bob_Zimmerman
Authority

I've collected the data I care about. It's posted here:

https://github.com/Bob-Zimmerman/CPAPI-Stats

Cyber_Serge
Collaborator

How about trying the demo mode from SmartConsole instead of building everything?

PhoneBoy
Admin

Demo Mode is definitely useful for quickly looking at stuff.
You can even make API calls against your Demo Mode instance if you know how 🙂

There are some limitations in Demo Mode, of course.
One I just figured out recently and got confirmation from R&D on is that the list of Updatable Objects shown in a Demo Mode instance is fixed around the time of GA for that specific release.
So, for example, the recently released Check Point Services object will not show until R81.10 goes GA and even then it will only show in R81.10 Demo Mode.

Bob_Zimmerman
Authority

Demo mode in SmartConsole is great, but you can't use it to learn how to get around on a VSX firewall, for example. At a guess, I would say it covers about 70% of what you need to know day-to-day, but more like 20% of what you need to know to set up the product or to fix it when things go wrong.

Vladimir
Champion

The only advantage of having older physical hardware in your lab is the L2 bonding functionality, as far as I can tell. SmartConsole Demo Mode in R80+ has become way too limited to be seriously useful; it is more of a look-up for new basic functionality and properties. An ESXi lab was good to me, as I was able to build quite a few replicas of clients' environments to walk them through various upgrade and modification processes, including multi-site VSX clusters with dynamic routing. I had to use Vyatta for the rest of the routing infrastructure components.

Bob_Zimmerman
Authority

The advantage of an old Check-Point-branded server for me is I can keep running it "forever" without needing to issue and apply a new eval license every four weeks (which isn't a scriptable process).

My main personal VM environment runs SmartOS, because I'm clearly a crazy person. While ESXi is cool, it's hard to netboot it, so the compute nodes have to maintain significant local state. With SmartOS, my compute nodes are effectively stateless, so all of their storage is just for my VMs, and upgrades are trivial. The headnode retains state, and the whole environment can be recreated from that state.

For routers, servers, and a few infrastructural services, it's worth checking out OpenBSD. It comes with OSPF, BGP, MPLS LDP, load balancing, an HTTP(S) server, a DHCP server, a RADIUS server (which can authenticate against the OpenBSD user database!), and a lot more. Great little utility OS for a lab, and it supports autoinstall via something a bit like a config_system file, but discovered via DHCP. Seriously, you can set up a whole bunch of endpoints without ever logging into them.
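
As a taste, an answer file is just "prompt = answer" lines served over HTTP and discovered via DHCP; a minimal sketch (all values are placeholders, and the question strings are abbreviated installer prompts, see autoinstall(8)):

# Publish a minimal autoinstall answer file from an OpenBSD web root.
cat > /var/www/htdocs/install.conf <<'EOF'
System hostname = lab-rtr1
Password for root = *************
Allow root ssh login = yes
Which network interface = em0
IPv4 address for em0 = dhcp
Location of sets = http
HTTP Server = cdn.openbsd.org
Set name(s) = -x* -game*
EOF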

Vladimir
Champion

Yeah, the four-week licenses for labs are a pain. Then again, if you are using any blades besides FW and VPN, you have the same issue regardless.

As to ESXi netboot: possible, if not trivial, but why? You can literally boot the sucker from a USB/SD card with your VMs still residing on iSCSI/NFS targets.

Thanks for the OpenBSD pointer. I'll take a look to see if it works for me. I like Vyatta because of a single configuration shell and file, even if its syntax is quite different from that of Cisco.

Bob_Zimmerman
Authority

VSX, management, identity awareness, and a few other things also don't require subscriptions. I mostly care about the management API right now.

As for netboot, the why is mostly about maintenance. Updating to a new SmartOS release involves downloading it on the headnode, telling the headnode that should be the default image, then a rolling reboot of your compute nodes. No need to copy anything to the compute nodes, no need to physically swap a card or thumb drive, and very rapid rollback if an issue is reported (just tell the headnode to hand out the old image, reboot the compute node having the problem, and it will come up with the OS it had before).
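
On a Triton/SmartDataCenter headnode, that flow is a couple of sdcadm commands (a sketch from memory; the version string is a placeholder, so check the sdcadm docs for your build):

# Download the newest platform image, make it the default for all nodes,
# then rolling-reboot the compute nodes; each one netboots the new platform.
sdcadm platform install --latest
sdcadm platform assign 20240101T000000Z --all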

Vladimir
Champion

"VSX, management, identity awareness, and a few other things also don't require subscriptions. I mostly care about the management API right now."

This makes perfect sense.

With ESXi, the rollback within single major release is actually pretty simple: https://kb.vmware.com/s/article/1033604

If you are jumping major releases, i.e. 6 to 7, it gets more complicated.

Bob_Zimmerman
Authority

Yeah, rolling back ESXi updates isn't awful, but it does require console access to the node. SmartOS is rolled back purely through headnode-side changes (which can be made via web UI, CLI command, or API call), and power-cycling the compute node. You can power-cycle a machine via IPMI call, Redfish call, or even simply a managed PDU cutting power to the socket and restoring it. Console access is a slightly higher threshold, and is timing-sensitive (implying there is a delay added to the boot process every time you don't need to revert).
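
For the power cycle itself, one example over IPMI (BMC address and credentials are placeholders):

# Once the headnode is set to hand out the old image, this is the whole
# rollback for that node.
ipmitool -I lanplus -H 192.0.2.10 -U admin -P secret chassis power cycle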

For one or two VM hosts (like a personal lab), it's a minor difference at best, yes. For larger environments, I like the SmartOS way a lot.

Plus SmartOS has the ability to run "OS VMs", which are illumos zones. No hardware emulation. Same thing as Solaris zones or FreeBSD jails, and like a vastly better version of the Docker runtime. Plus ZFS, SMF, DTrace, fmd, and a few other niceties. I like that my VM environment runs a normal OS which I know, unlike ESXi's weird vaguely-Linux-but-not-really internals. I like Hyper-V Server for similar reasons (I'm surprised how much I like PowerShell!), though it's kind of painful to get running without a domain controller.
