Johan_Hillstrom
Contributor

Mid-size appliances cannot be managed locally when configured with the SSD option

We are in the process of replacing some old 4200s and are looking at a mix of 3200s and 5100s as replacements.

However, when reading the fine print, we found this:

  1. Every Check Point appliance can either be managed locally with the available integrated security management1 or via central unified management. Using local management, the appliance can manage itself and one adjacent appliance for high availability deployments.

    1 not available when purchased with the SSD option

Does anyone know the reason for this?

Could it be related to disk size?

If so, why aren't there larger SSD options available?

We would really like to use SSDs, since spinning disks feel so 2012...

Over the last few years, most hardware failures have been disk related, so SSD seems the way to go, both in terms of speed and reliability.


11 Replies
Vladimir
Champion

I suspect that it is not so much the disk size as the SSD wear that would be caused by logging and indexing. Should you write the logs locally, the impact on SSD performance could be quite dramatic and would result in inconsistent behavior, making it hard to size the appliance accurately. It makes sense to offload these functions to remote management or, at least, to remote logging server(s).

Johan_Hillstrom
Contributor

Sorry for the late reply, I have been on holiday.
I don't really buy the argument that SSDs can't handle the wear; modern SSDs have pretty good wear-leveling algorithms. And performance is not an issue: even consumer-grade SSDs outperform any spinning disk in read speed, write speed, and access time.
We currently run our entire ESX farm, including SmartEvent/SmartLog and other I/O-intensive hosts, on an all-flash array.

I found an old requirement for R80 stating a minimum of 500 GB.

Martin_Zemljic
Explorer

We got the official reason for this. Check Point says that SSD disks are not fast enough for management.

(Needless to say, the policy changed between our order and delivery...; elsewhere I still have a running 3xxx + SSD + MGMT.)

Johan_Hillstrom
Contributor

Sounds really strange to me; almost any SSD outperforms spinning disks today, even in sustained reads or writes.

Martin_Zemljic
Explorer

I am not saying I believe it.

Also, if management requires 500 GB (per your reply to V. Yakovlev), then it would not run on the 3200 model with an HDD either: it is sold with a 320 GB disk, and the SSD option gets 240 GB.

Timothy_Hall
Champion

Vladimir Yakovlev had some great insights on this matter.  The fact that local management (and associated traffic logging) on an appliance is not allowed with an SSD tells me it is indeed a wear-leveling or performance-consistency issue.  I'm not sure why Check Point would say that "SSD disks are not fast enough for management" unless there is some kind of bottleneck in the current Gaia mass-storage drivers, possibly related to the current unofficial recommendation to leave SMT/Hyperthreading disabled on a 2.6.18-kernel SMS due to I/O driver limitations.  I suppose it is possible that having a speedy SSD underneath could exacerbate the driver bottleneck, or cause inconsistent I/O performance if the drive is busy wear-leveling itself during heavy logging periods.  The current Gaia kernel (2.6.18) does not support TRIM (it was added in 2.6.33), but as we all know a Gaia kernel update is forthcoming, first for the SMS and then for the gateway.  It wouldn't surprise me if Check Point warms up to SSDs a bit more after this kernel update.

I don't think having an SSD on a pure security gateway would make much difference performance-wise, since it almost never hits the disk, unless it has to log to its own drive or there is insufficient RAM causing swapping, which is another matter entirely.
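As a side note, the TRIM point above boils down to a simple version comparison: discard support landed in Linux 2.6.33, so any earlier kernel release cannot pass TRIM commands to the drive. A minimal sketch (the release strings below are illustrative examples, not taken from any specific Gaia build):

```python
def kernel_supports_trim(release: str) -> bool:
    """True if the kernel version is >= 2.6.33, where TRIM/discard support landed."""
    # Strip any vendor suffix such as "-92cpx86_64" and compare numerically.
    version = tuple(int(p) for p in release.split("-")[0].split(".")[:3])
    return version >= (2, 6, 33)

print(kernel_supports_trim("2.6.18-92cpx86_64"))            # old kernel -> False
print(kernel_supports_trim("3.10.0-957.21.3.el7.x86_64"))   # newer kernel -> True
```

On a live Linux box one could instead inspect `/sys/block/<dev>/queue/discard_granularity`; a non-zero value means the device and kernel advertise discard support.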

--
Second Edition of my "Max Power" Firewall Book
Now Available at http://www.maxpowerfirewalls.com

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
PhoneBoy
Admin

The official reason for the limitation is the smaller disk size of the SSDs versus their HDD counterparts.

Vladimir
Champion

I am not sure if this is still applicable: "The case against SSDs" | ZDNet. It may have been the basis for CP not endorsing the use of single SSDs for management servers.

Generally, all cloud providers use SSDs by default, but it should be noted that those are part of redundant arrays, and that data stored on them can be dynamically relocated from under-performing media to reconditioned or new media without your knowledge.

Additionally, as was brought up in one of the other discussions on CheckMates, CP is using a rather old kernel for Gaia and there are unresolved issues with partition alignment.

Here (in the first reply) is a pretty good breakdown of SSD operation that sheds some light on it:

partitioning - Is partition alignment to SSD erase block size pointless? - Super User 
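The alignment issue discussed in that thread reduces to modular arithmetic: a partition is aligned when its starting byte offset is a multiple of the SSD's erase-block size. A minimal sketch, with the 1 MiB erase-block size as an assumed illustrative value:

```python
def is_aligned(start_sector: int, sector_size: int = 512,
               erase_block: int = 1024 * 1024) -> bool:
    """True if the partition's starting byte offset lands on an erase-block boundary."""
    return (start_sector * sector_size) % erase_block == 0

print(is_aligned(2048))  # modern 1 MiB default start: aligned
print(is_aligned(63))    # legacy CHS-style start: misaligned
```

A misaligned start means writes near block boundaries straddle two erase blocks, forcing the controller into the extra read-modify-write cycles the linked answer describes.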

Marcos_Vieira
Contributor

I have just opened a service request, and the answer about the 3000 SSD and Standalone mode was positive. However...

Looking at sk110052, it says: Solid State Drive (SSD) is supported with a Standalone Deployment only on 3100/3200 appliances.

In the same sk110052, three lines above, there is confusing text saying: When using Solid State Drive (SSD), the local logging feature in SmartConsole is not supported on the Security Gateway. In case of a disconnection between the Log Server and the Security Gateway, logs will automatically be saved locally on the Security Gateway. So it contradicts itself within the same paragraph.

Why is Standalone supported on the 3000 SSD family but not on the other appliances (5100 up to 5800) that use the same disk (at least the same size)?

Benny_Shlesinge
Employee Alumnus

The main reason is wear-out rates; disk size is the second reason.

We built a model based on diagnostic data collected from production environments and found that, in some cases, standalone deployments would exceed the wear-out rating of the SSDs within 2-3 years, while we aim at 5 years as the bare minimum.

We also got customer feedback that for one of our main competitors the SSDs used tended to wear out, and this can become a real issue.

That said, as SSD technologies keep improving, we are reconsidering this decision.

Today we support standalone mode on the 3100 and 3200 (which typically run less traffic, hence fewer logs) and the 5900 (which has a larger SSD than the other 5000 appliances), and we are working towards qualifying all the other models as well.

HTH,

Benny.
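Benny's 2-3 year figure is easy to sanity-check with a back-of-envelope endurance model: divide the drive's rated Terabytes Written (TBW) by the daily write volume times a write-amplification factor. All the numbers below are illustrative assumptions, not Check Point diagnostics:

```python
def ssd_lifetime_years(tbw: float, daily_writes_gb: float,
                       write_amplification: float = 2.0) -> float:
    """Years until the drive's rated TBW is exhausted at a steady write rate."""
    daily_tb = daily_writes_gb * write_amplification / 1024
    return tbw / (daily_tb * 365)

# e.g. a 240 GB drive rated for 100 TBW under ~40 GB/day of local logging:
print(round(ssd_lifetime_years(100, 40), 1))  # ~3.5 years
```

Under these assumed numbers the drive wears out well short of a 5-year target, which is consistent with the reasoning above; central logging removes most of that write load from the appliance.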
