Thanks to this comment on GitHub, I figured it out. This Helm chart accepts so many configuration options that I missed this one while reading the docs.

It turns out that this Helm chart accepts a configuration option `prometheus.prometheusSpec.containers`. Its description in the docs says: "Containers allows injecting additional containers. This is meant to allow adding an authentication proxy to a Prometheus pod". But it is not limited to an authentication proxy: you can pass any container spec here and it will be added to the Prometheus StatefulSet created by this Helm chart.
Here is the sample configuration I used (shown below). Some key points:

- Replace the values in angle brackets with your actual values.
- Feel free to remove the `--include` arg. I added it because `nginx_http_requests_total` is the only Prometheus metric I want to send to Stackdriver for now. Check Managing costs for Prometheus-derived metrics for more details. (A hedged example of filtering more than one metric follows this list.)
- To figure out the name of the volume to use in `volumeMounts`:
  - List the StatefulSets in the Prometheus Operator namespace. Assuming you installed it in the `monitoring` namespace: `kubectl get statefulsets -n monitoring`
  - Describe the Prometheus StatefulSet, assuming its name is `prometheus-prom-operator-prometheus-o-prometheus`: `kubectl describe statefulset prometheus-prom-operator-prometheus-o-prometheus -n monitoring`
  - In the details of this StatefulSet, find the container named `prometheus` and note the value passed to its `--storage.tsdb.path` arg.
  - Find the volume that is mounted on this container at the same path. In my case, it was `prometheus-prom-operator-prometheus-o-prometheus-db`, so I mounted the same volume on my Stackdriver sidecar container as well. (A one-liner that extracts this directly follows this list.)
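If you want to forward more than one metric, the sidecar's `--include` flag also accepts Prometheus-style metric selectors, and to the best of my knowledge the flag can be repeated, one filter per flag. Treat the exact selector syntax below as an assumption and verify it against the sidecar's documentation for your version:

```yaml
args:
  # Bare metric name, as used in the config below
  - --include=nginx_http_requests_total
  # Assumed selector form: match every metric whose name starts with nginx_
  - '--include={__name__=~"nginx_.*"}'
```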
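To save some scrolling through `kubectl describe` output, here is a hedged one-liner that prints the volume mounts of the `prometheus` container directly. It assumes the StatefulSet and namespace names used above; adjust them for your cluster:

```sh
# Print name/mountPath pairs for the prometheus container's volume mounts;
# the volume mounted at the --storage.tsdb.path directory is the one you need
kubectl get statefulset prometheus-prom-operator-prometheus-o-prometheus -n monitoring \
  -o jsonpath='{.spec.template.spec.containers[?(@.name=="prometheus")].volumeMounts}'
```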
```yaml
prometheus:
  prometheusSpec:
    containers:
      - name: stackdriver-sidecar
        image: gcr.io/stackdriver-prometheus/stackdriver-prometheus-sidecar:0.7.5
        imagePullPolicy: Always
        args:
          - --stackdriver.project-id=<GCP PROJECT ID>
          - --prometheus.wal-directory=/prometheus/wal
          - --stackdriver.kubernetes.location=<GCP PROJECT REGION>
          - --stackdriver.kubernetes.cluster-name=<GKE CLUSTER NAME>
          - --include=nginx_http_requests_total
        ports:
          - name: stackdriver
            containerPort: 9091
        volumeMounts:
          - name: prometheus-prom-operator-prometheus-o-prometheus-db
            mountPath: /prometheus
```
Save this YAML to a file. Let's assume you saved it to `prom-config.yaml`.

Now, find the release name you used to install the Prometheus Operator Helm chart on your cluster:

```sh
helm list
```

Assuming the release name is `prom-operator`, you can update this release according to the config composed above by running this command:

```sh
helm upgrade -f prom-config.yaml prom-operator stable/prometheus-operator
```
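To confirm that the sidecar was actually injected, you can list the containers of the Prometheus pod. This is a hedged check: the pod name below assumes the StatefulSet name from earlier plus the `-0` ordinal that StatefulSets append to their pods:

```sh
# Should print the container names, including both prometheus and stackdriver-sidecar
kubectl get pod prometheus-prom-operator-prometheus-o-prometheus-0 -n monitoring \
  -o jsonpath='{.spec.containers[*].name}'
```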
I hope you found this helpful.