israelsc
Collaborator

Skyline Server Sizing

Hello, everyone!
I recently deployed a “Skyline” server in a lab environment where, for testing purposes, I configured it with 64GB of disk space, 2 CPU cores, and 4GB of RAM.
In this lab, I am monitoring only one Quantum Spark 1550 R81.10.17, and everything is working very well.

To retain historical information from the dashboards, I configured the following line in the Prometheus YAML to store 30 days of information:
--storage.tsdb.retention.time=30d
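For reference, Prometheus also supports a size-based retention cap that can be combined with the time-based one, so the TSDB never outgrows the partition (the size value below is illustrative, not a Skyline default):

```
--storage.tsdb.retention.time=30d
--storage.tsdb.retention.size=50GB
```

When both flags are set, Prometheus deletes the oldest blocks as soon as either limit is reached.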

My general question is, how can I size a Skyline server?
I would like to know how much computing power to consider for an environment that has almost 200 firewalls (mostly Quantum Spark) [How much RAM? How much CPU? How much storage?]

If I want to store more than 30 days (90, for example), how much storage should I consider?

Is there any document or guide for Skyline sizing?

Greetings to all!

5 Replies
the_rock
MVP Platinum

See if this sk helps.

https://support.checkpoint.com/results/sk/sk178566

Best,
Andy
toblun
Participant

I haven't found any official documentation for sizing, but 4 vCPUs, 16 GB RAM, and 500 GB of disk work well for 100+ objects with 60-90 days of history. Just remember to configure the retention correctly!

JozkoMrkvicka
Authority

What do you mean by configuring retention correctly? On Prometheus it is just a matter of setting the correct value for the "--storage.tsdb.retention.time" parameter and restarting the service. If you don't have free space on the partition where Prometheus stores the data, you might run into trouble, that's clear 🙂

Kind regards,
Jozko Mrkvicka
Vincent_Bacher

Hi,

I don't know of any official Check Point sizing documentation for the "Skyline Server".

Since Skyline uses Open Telemetry (Prometheus), it’s best to rely on general Prometheus sizing and best-practice guidelines (targets, scrape/remote write interval, metric cardinality, retention time). These give a good framework to estimate CPU, RAM, and disk requirements.
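Those factors can be turned into a rough disk estimate with the usual Prometheus rule of thumb: needed disk ≈ retention time × ingested samples per second × bytes per sample, with roughly 1-2 bytes per compressed sample. A minimal sketch, where the target count, series-per-target figure, and scrape interval are illustrative placeholders rather than measured Skyline values:

```python
# Rough Prometheus TSDB disk sizing sketch. The bytes-per-sample figure
# follows the commonly cited 1-2 bytes per compressed sample; all other
# inputs below are assumptions, not Check Point guidance.

def estimate_disk_gb(num_targets, series_per_target, scrape_interval_s,
                     retention_days, bytes_per_sample=2.0):
    """Estimate on-disk TSDB size in GB for a given scrape load."""
    samples_per_second = num_targets * series_per_target / scrape_interval_s
    retention_seconds = retention_days * 24 * 3600
    return samples_per_second * retention_seconds * bytes_per_sample / 1e9

# Hypothetical example: 200 gateways, ~1,000 series each, scraped every
# 30 seconds, kept for 90 days.
print(round(estimate_disk_gb(200, 1000, 30, 90), 1))  # → 103.7 (GB)
```

Halving the scrape interval or doubling the retention scales the result linearly, which is why the retention setting and the metric list matter so much for disk planning.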

Sizing is usually iterative and should be adjusted as the number of monitored gateways grows.

And it also depends on whether you just want to read the standard metrics or expand the list. There's much more to it than the default setting.

and now to something completely different - CCVS, CCAS, CCTE, CCCS, CCSM elite
David_Evans
Collaborator

I am able to push about 500 devices to this Linux box. I did have to lower the scrape frequency for it to keep up; nothing is scraped faster than once a minute now.
800 GB on the data partition will just about hold 6 months of data.
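As a quick sanity check on those figures, one can back out the implied ingest rate; the bytes-per-sample value and the derived per-device series count below are illustrative guesses, not measured values:

```python
# Back-of-envelope check of the figures above. Assumptions (not from the
# post): ~1.5 bytes per compressed sample, a flat 180-day retention window.
disk_bytes = 800e9                      # 800 GB data partition
retention_s = 180 * 24 * 3600           # ~6 months in seconds
bytes_per_sample = 1.5

samples_per_second = disk_bytes / (retention_s * bytes_per_sample)
series_per_device = samples_per_second * 60 / 500   # 60 s scrape, 500 devices
print(int(samples_per_second), int(series_per_device))  # → 34293 4115
```

A few thousand series per gateway is plausible for a fully populated exporter, which suggests the reported 800 GB / 6 months figure is in the expected range.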


If I get more than 3 or 4 people looking at complex dashboards, or running them over long time frames, it noticeably takes a long time to pull up the data.

And that is with Prometheus and Grafana both running on this VM, which has 16 cores.

 

