0 votes

We have an OpenShift cluster (v3.11) with Prometheus collecting metrics as part of the platform. We need long-term storage for these metrics, and our hope is to use our InfluxDB time-series database to store them.

The Telegraf agent (the T in the TICK Stack) has an input plugin for Prometheus and an output plugin for InfluxDB, so this seems like a natural solution.

What I'm struggling with is how the Telegraf agent is set up to scrape the metrics within OpenShift; I think the config and docs relate to Prometheus outside of OpenShift. I can't find any references to how to set this up with OpenShift.

Does a Telegraf agent need to reside on OpenShift itself, or can it be set up to collect remotely via a published route?

If anyone has any experience setting this up or can provide some pointers I'd be grateful.


1 Answer

2 votes

Looks like the easiest way to get metrics from the OpenShift Prometheus using Telegraf is to use the default service that comes with OpenShift. The URL to scrape from is: https://prometheus-k8s-openshift-monitoring.apps.<your domain>/federate?match[]=<your conditions>
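So the Telegraf agent can run outside the cluster and scrape over the published route. A minimal sketch of the Telegraf config, assuming the federation route above, a user named telegraf added to the htpasswd secret, and a reachable InfluxDB (all hostnames, credentials, and the match[] condition are placeholders, not values from your cluster):

```toml
# telegraf.conf (sketch; hostnames, credentials and match[] are placeholders)
[[inputs.prometheus]]
  ## The OpenShift federation endpoint published behind the router
  urls = ["https://prometheus-k8s-openshift-monitoring.apps.example.com/federate?match[]={job=\"kubelet\"}"]
  ## Basic-auth credentials of the user added to the prometheus-k8s-htpasswd secret
  username = "telegraf"
  password = "s3cret"
  ## Skip TLS verification if the route uses a self-signed certificate
  insecure_skip_verify = true

[[outputs.influxdb]]
  urls = ["http://influxdb.example.com:8086"]
  database = "openshift_metrics"
```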

As Prometheus sits behind the OpenShift authentication proxy, the only challenge is authentication. You should add a new user to the prometheus-k8s-htpasswd secret and use those credentials for scraping.

To do this, run htpasswd -nbs <login> <password> and add the output to the end of the prometheus-k8s-htpasswd secret.
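For reference, htpasswd -nbs emits an SHA-1 entry of the form login:{SHA}<base64 digest>. A short Python sketch reproducing that computation (the telegraf login and password are made-up examples); it also prints the base64 form, since a Kubernetes/OpenShift secret stores the whole htpasswd file base64-encoded, so in practice you decode the existing value, append the new entry, and re-encode:

```python
import base64
import hashlib

def htpasswd_sha1_entry(login: str, password: str) -> str:
    """Reproduce the output of `htpasswd -nbs <login> <password>`."""
    digest = hashlib.sha1(password.encode("utf-8")).digest()
    return f"{login}:{{SHA}}{base64.b64encode(digest).decode('ascii')}"

# Example entry to append to the decoded htpasswd file
entry = htpasswd_sha1_entry("telegraf", "s3cret")
print(entry)

# The secret's data field holds the htpasswd file base64-encoded;
# a file containing only this one entry would encode to:
print(base64.b64encode((entry + "\n").encode("utf-8")).decode("ascii"))
```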

The other way is to disable authentication for the /federate endpoint. To do this, edit the command of the prometheus-proxy container inside the prometheus stateful set and add the -skip-auth-regex=^/federate option.
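A sketch of roughly where that flag lands in the stateful set manifest; the surrounding args are illustrative, only the added -skip-auth-regex line comes from the answer, and the rest of the container spec is abbreviated:

```yaml
# oc -n openshift-monitoring edit statefulset prometheus-k8s  (sketch)
spec:
  template:
    spec:
      containers:
        - name: prometheus-proxy
          args:
            # ...existing proxy arguments kept as-is...
            - -skip-auth-regex=^/federate   # let /federate bypass the auth proxy
```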