Skyline - capacity of Prometheus/Grafana server
Are there any minimum or general capacity requirements for a Skyline server?
The usual stuff: storage, RAM, CPU, etc.
Let's say it is supporting data from 50 gateways.
Regards, Poul Erik
Grafana and Prometheus are open source.
You can see the minimum requirements for Grafana here: https://grafana.com/docs/grafana/latest/setup-grafana/installation/
Prometheus doesn't provide any hard guidance, but they do provide some details here:
- RAM: https://www.robustperception.io/how-much-ram-does-prometheus-2-x-need-for-cardinality-and-ingestion/
- Storage: https://prometheus.io/docs/prometheus/latest/storage/
How this translates to 50 Check Point devices, I can't say.
@Tsahi_Etziony approximately what specs are we using to test this?
The storage requirement for the Prometheus database for storing Skyline data is roughly 25 MB per reporting device for the default retention period of 15 days. This default can be changed in the Prometheus server configuration, and you should adjust the calculation accordingly.
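As a rough back-of-envelope, assuming the ~25 MB per device figure above and that storage scales roughly linearly with retention (the flag is a real Prometheus option; the values are illustrative):

# Illustrative sizing, assuming ~25 MB per device per 15-day window:
#   50 gateways * 25 MB            = ~1.25 GB at the default 15d retention
#   50 gateways * 25 MB * (30/15)  = ~2.5 GB at 30d retention
# Retention is set as a Prometheus command-line flag:
prometheus --storage.tsdb.retention.time=30d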
We don't have exact numbers for the CPU and RAM requirements, but I can tell you that we have tested a couple dozen reporting devices against a Prometheus & Grafana server installed on a small VM with 4 cores and 8 GB of RAM. For larger environments I would advise getting something a little more robust, but as you can see, it doesn't have to be too big.
I was looking for this same information. What I can tell you is that for our POC I have it running with about 75 devices. The 4-CPU Linux box is pretty busy with just a handful of alerts running and no one actively hitting the web dashboards. I'm trying to work out how this is going to scale to 700 devices; a single 40-CPU VM probably isn't a great solution for a linear 10x increase.
We have grown our deployment some more. One thing we noticed: Prometheus is not very efficient at pruning its data when it is constrained by drive size. We had hit the limit set by "--storage.tsdb.retention.size", and it was spending a lot of CPU and I/O cycles making room for new incoming data. In our environment, at least, setting "--storage.tsdb.retention.time", letting Prometheus do its cleanup that way, and managing the size of the data store manually seems a much better use of resources. Keep "--storage.tsdb.retention.size" set just to keep from accidentally filling up the drive.
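As a sketch, that combination looks like this on the Prometheus command line (the path and values are illustrative, not a recommendation):

# Time-based retention does the routine pruning; the size cap stays on
# only as a safety net against filling the disk.
prometheus \
  --storage.tsdb.path=/var/lib/prometheus \
  --storage.tsdb.retention.time=15d \
  --storage.tsdb.retention.size=200GB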
Here are our current stats after fixing the storage settings. Before, we were running at 80%+ across the CPUs most of the day.
Hi @David_Evans,
For much bigger environments, I recommend also taking a look at VictoriaMetrics; we have had some good experience with it as well, and its API is almost identical to Prometheus's: https://last9.io/blog/prometheus-vs-victoriametrics/. From what I understand, it is more resource-efficient and easier to scale than Prometheus. In general, it is also possible to split Prometheus into multiple instances and connect Grafana to all of them, although that slightly increases the overhead of managing the environment.
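As a rough sketch of the split-instance approach, Grafana can be pointed at several Prometheus servers through data source provisioning; the file path, names, and URLs here are hypothetical:

# /etc/grafana/provisioning/datasources/prometheus.yaml (hypothetical)
apiVersion: 1
datasources:
  - name: Prometheus-A
    type: prometheus
    url: http://prom-a.example.local:9090
  - name: Prometheus-B
    type: prometheus
    url: http://prom-b.example.local:9090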
It varies, as we have multiple environments, but usually we use a server with the following spec:
4 cores, 8-16 GB memory, 50-100 GB disk space.
Usually that is enough for all of our internal lab needs. But there is no specific recommendation; scaling should be done by monitoring and observing the current load on the system.
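As a starting point for that monitoring, Prometheus exposes self-metrics that can be queried directly; a sketch with standard metric names (the job label assumes Prometheus scrapes itself under job="prometheus"):

rate(prometheus_tsdb_head_samples_appended_total[5m])   # samples ingested per second
prometheus_tsdb_head_series                             # active time series (cardinality)
process_resident_memory_bytes{job="prometheus"}         # server memory footprint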
