
We have set up Prometheus + Grafana on our GKE cluster using the stable/prometheus-operator Helm chart. Now we want to export some metrics to Stackdriver because we have installed the Custom Metrics Stackdriver Adapter. We are already using some Pub/Sub metrics from Stackdriver for autoscaling a few deployments. Now we also want to use some Prometheus metrics (mainly nginx request rate) for autoscaling other deployments.

So, my first question: can we use the Prometheus adapter in parallel with the Stackdriver adapter for autoscaling in the same cluster?

If not, we will need to install the Stackdriver Prometheus Sidecar to export the Prometheus metrics to Stackdriver and then use them for autoscaling via the Stackdriver adapter.

From the instructions here, it looks like we need to install the Stackdriver sidecar in the same pod that Prometheus is running in. I gave it a try. When I ran the patch.sh script, I got the message statefulset.apps/prometheus-prom-operator-prometheus-o-prometheus patched, but when I inspected the StatefulSet again, it didn't have the Stackdriver sidecar container in it. Since this StatefulSet is created by a Helm chart, we probably can't modify it directly. Is there a recommended way of doing this in Helm?
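
For reference, whether the sidecar container actually made it into the pod spec can be checked with something like this (just a sketch; the monitoring namespace is an assumption, adjust names to your setup):

kubectl get statefulset prometheus-prom-operator-prometheus-o-prometheus \
  -n monitoring \
  -o jsonpath='{.spec.template.spec.containers[*].name}'

In my case, the sidecar container never shows up in that list after running patch.sh.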

You can still modify the StatefulSet even though it was created by Helm. Helm does not maintain the state of the deployment; it just deploys resources based on charts and templates. – Patrick W
I've sometimes seen issues with patching live resources. You can update the deployment by updating the Helm template, or do something like kubectl get statefulset prometheus-prom-operator-prometheus-o-prometheus -o yaml --export > file.yaml, add the required fields, then kubectl apply -f file.yaml. – Patrick W
@PatrickW Nope, this doesn't seem to work. Even if I simply delete the StatefulSet, it automatically gets recreated. There is this block of lines in the generated YAML of the StatefulSet: ownerReferences: - apiVersion: monitoring.coreos.com/v1 blockOwnerDeletion: true controller: true kind: Prometheus name: prom-operator-prometheus-o-prometheus uid: 5f3e13d6-0caa-428c-88b0-39c883f93ec4 — could it be responsible for this? – Muhammad Anas
Did you try to edit the Helm chart responsible for spawning this StatefulSet to include the sidecar's definition and recreate the pods? – Dawid Kruk
@MuhammadAnas I'm glad to hear you resolved it. Can you please share your results as an answer, as it will be more visible to other community members? – Dawid Kruk

1 Answer


Thanks to this comment on GitHub, I figured it out. This Helm chart accepts so many configuration options that I had missed the relevant one while reading the docs.

So, it turns out that this Helm chart accepts a configuration option prometheus.prometheusSpec.containers. Its description in the docs says: "Containers allows injecting additional containers. This is meant to allow adding an authentication proxy to a Prometheus pod." But it is obviously not limited to an authentication proxy: you can pass any container spec here and it will be added to the Prometheus StatefulSet created by this Helm chart.

Here is the sample configuration I used. Some key points:

  1. Please replace the values in angle brackets with your actual values.
  2. Feel free to remove the --include arg. I added it because nginx_http_requests_total is the only Prometheus metric I want to send to Stackdriver for now. Check Managing costs for Prometheus-derived metrics for more details about it.
  3. To figure out the name of the volume to use in volumeMounts (there is also a shell sketch after the sample configuration below):
    1. List the StatefulSets in the Prometheus Operator namespace. Assuming that you installed it in the monitoring namespace: kubectl get statefulsets -n monitoring
    2. Describe the Prometheus StatefulSet, assuming that its name is prometheus-prom-operator-prometheus-o-prometheus: kubectl describe statefulset prometheus-prom-operator-prometheus-o-prometheus -n monitoring
    3. In the details of this StatefulSet, find the container named prometheus and note the value passed to its --storage.tsdb.path argument.
    4. Find the volume that is mounted on this container at the same path. In my case, it was prometheus-prom-operator-prometheus-o-prometheus-db, so I mounted the same volume on my Stackdriver sidecar container as well.
prometheus:
  prometheusSpec:
    containers:
      - name: stackdriver-sidecar
        image: gcr.io/stackdriver-prometheus/stackdriver-prometheus-sidecar:0.7.5
        imagePullPolicy: Always
        args:
          - --stackdriver.project-id=<GCP PROJECT ID>
          - --prometheus.wal-directory=/prometheus/wal
          - --stackdriver.kubernetes.location=<GCP PROJECT REGION>
          - --stackdriver.kubernetes.cluster-name=<GKE CLUSTER NAME>
          - --include=nginx_http_requests_total
        ports:
          - name: stackdriver
            containerPort: 9091
        volumeMounts:
          - name: prometheus-prom-operator-prometheus-o-prometheus-db
            mountPath: /prometheus
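
As a shortcut for step 3 above, the volume mount names and paths of the prometheus container can also be pulled out with jsonpath. This is only a sketch and assumes the monitoring namespace and the StatefulSet name used above:

kubectl get statefulset prometheus-prom-operator-prometheus-o-prometheus \
  -n monitoring \
  -o jsonpath='{range .spec.template.spec.containers[?(@.name=="prometheus")].volumeMounts[*]}{.name}{" -> "}{.mountPath}{"\n"}{end}'

The volume mounted at the value of --storage.tsdb.path (usually /prometheus) is the one to reuse in the sidecar's volumeMounts.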

Save this YAML to a file; let's assume you saved it as prom-config.yaml.

Now, find the release name you used to install the Prometheus Operator Helm chart on your cluster:

helm list

Assuming that the release name is prom-operator, you can update this release according to the config composed above by running this command:

helm upgrade -f prom-config.yaml prom-operator stable/prometheus-operator
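
As a follow-up to the original goal (autoscaling on nginx request rate): once the sidecar is shipping nginx_http_requests_total to Stackdriver, the Custom Metrics Stackdriver Adapter should be able to serve it to an HPA as an external metric. The snippet below is only a sketch, not part of my actual setup: the HPA name, the target deployment, the threshold, and even the exact external metric name (sidecar metrics usually land in Stackdriver as external.googleapis.com/prometheus/<metric>, with / replaced by | on the adapter side) are assumptions you should verify against your cluster.

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa                  # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx                    # hypothetical deployment to scale
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: External
      external:
        metricName: external.googleapis.com|prometheus|nginx_http_requests_total
        targetAverageValue: "100"  # assumed per-pod target; tune for your workload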

I hope you found this helpful.