Writing to Prometheus Issue for all gateways/managers but one
I have been trying to configure my Check Point estate to stream OpenTelemetry data to a Prometheus/Grafana instance. I have followed all the relevant documentation and videos, yet only one appliance out of several is working.

For example, I have a pair of managers, both configured the same way, but only one successfully writes metrics to Prometheus. The one that doesn't work shows the following in otelcol.log:

2024-12-06T15:11:04.302Z error exporterhelper/queued_retry.go:391 Exporting failed. The error is not retryable. Dropping data. {"kind": "exporter", "data_type": "metrics", "name": "prometheusremotewrite", "error": "Permanent error: Permanent error: Post \"http://10.128.251.44:9090/api/v1/write\": context deadline exceeded", "dropped_items": 269}
go.opentelemetry.io/collector/exporter/exporterhelper.(*retrySender).send
go.opentelemetry.io/collector/exporter@v0.82.0/exporterhelper/queued_retry.go:391
go.opentelemetry.io/collector/exporter/exporterhelper.(*metricsSenderWithObservability).send
go.opentelemetry.io/collector/exporter@v0.82.0/exporterhelper/metrics.go:125
go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).start.func1
go.opentelemetry.io/collector/exporter@v0.82.0/exporterhelper/queued_retry.go:195
go.opentelemetry.io/collector/exporter/exporterhelper/internal.(*boundedMemoryQueue).StartConsumers.func1
go.opentelemetry.io/collector/exporter@v0.82.0/exporterhelper/internal/bounded_memory_queue.go:47

The same applies to some gateways I have also tried this on.

Has anyone else encountered this issue?
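For reference, "context deadline exceeded" on the Post to /api/v1/write means the collector's HTTP request timed out without getting any reply from Prometheus, which usually points at the network path (routing, NAT, firewall) rather than at Prometheus rejecting the data. Below is a minimal reachability sketch, assuming the remote-write URL taken from the log line above; it does not send a valid remote-write payload, so any HTTP response (even a 4xx) shows the path is open, while a timeout reproduces the collector's symptom.

```go
// remote_write_probe.go: minimal reachability check for the Prometheus
// remote-write endpoint seen in the otelcol.log error above. It does NOT
// send a valid remote-write payload; any HTTP response (even a 4xx) proves
// the network path is open, while a timeout reproduces the
// "context deadline exceeded" symptom. The URL is copied from the log
// line and is an assumption about the environment.
package main

import (
	"bytes"
	"context"
	"fmt"
	"net/http"
	"time"
)

func main() {
	const endpoint = "http://10.128.251.44:9090/api/v1/write"

	// Short timeout so a firewall drop shows up as a deadline error,
	// similar to what the exporter reports.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, http.MethodPost, endpoint, bytes.NewReader(nil))
	if err != nil {
		fmt.Println("building request failed:", err)
		return
	}

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		// A deadline/timeout here means no reply ever came back: look at
		// routing, NAT and firewall rules between this host and Prometheus.
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	// Any status code means the path is open; Prometheus will typically
	// reject the empty body, which is expected for this probe.
	fmt.Println("endpoint reachable, HTTP status:", resp.Status)
}
```

Running the same probe from the working manager and from a failing appliance (or any host on the same network segments) should show quickly whether the difference is network reachability or the collector configuration.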
1 Solution
Accepted Solutions
This was resolved. It was a firewall issue: the cluster VIP address was not being allowed through the firewall.
2 Replies
It seems I can get both managers and the event box to work, but I cannot get any of my 5 clusters to work. All of them return the above error in otelcol.log.

Can anyone suggest log files worth looking at, either on the cluster members or on the Prometheus server? Looking at the resources on the Prometheus server, there is no sign of high utilisation across disk, network, CPU or memory.
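One thing worth checking on the cluster members: outgoing connections from cluster members are often hidden behind the cluster VIP, so the source address seen by any firewall in front of the Prometheus server may be the VIP rather than a member's own IP, and a rule that only allows the member addresses would silently drop the traffic. A minimal TCP-level sketch (again assuming the Prometheus address from the log above) that reproduces the timeout at the connection level:

```go
// tcp_probe.go: quick TCP check from a cluster member to the Prometheus
// remote-write port. If this times out while the same check succeeds from
// a working manager, the problem is in the network path (routing, NAT,
// firewall) rather than in the collector configuration. The address is
// taken from the otelcol.log error and is an assumption.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const target = "10.128.251.44:9090"

	conn, err := net.DialTimeout("tcp", target, 5*time.Second)
	if err != nil {
		// An i/o timeout here matches the collector's "context deadline
		// exceeded": traffic is being dropped somewhere along the path.
		fmt.Println("connection failed:", err)
		return
	}
	defer conn.Close()

	// Note: if the member hides its outgoing traffic behind the cluster
	// VIP, the address seen on the Prometheus side may differ from the
	// local address printed here.
	fmt.Println("TCP connection established from", conn.LocalAddr(), "to", conn.RemoteAddr())
}
```

If this times out from the cluster members but succeeds from the managers, compare the firewall rules for the member addresses and the cluster VIP.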
This was resolved. It was a firewall issue: the cluster VIP address was not being allowed through the firewall.
