6 votes

I am using Prometheus to monitor my Kubernetes cluster. I have set up Prometheus in a separate namespace. I have multiple namespaces and multiple pods running. Each pod's container exposes custom metrics at the endpoint :80/data/metrics. I am getting the pods' CPU and memory metrics etc., but how do I configure Prometheus to pull data from :80/data/metrics on each available pod? I have used this tutorial to set up Prometheus: Link

2
How do you expose pod-level metrics at the mentioned endpoint? Do we need to add something special in the application's Kubernetes deployment file? – Jarvis

2 Answers

10 votes

You have to add these three annotations to your pods:

prometheus.io/scrape: 'true'
prometheus.io/path: '/data/metrics'
prometheus.io/port: '80'
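
If your pods are managed by a Deployment, the annotations belong on the pod template (spec.template.metadata.annotations), not on the Deployment's own metadata. A minimal sketch, assuming a hypothetical app called my-app whose container serves the metrics on port 80:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app                 # hypothetical name
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
          annotations:
            prometheus.io/scrape: 'true'
            prometheus.io/path: '/data/metrics'
            prometheus.io/port: '80'
        spec:
          containers:
          - name: my-app           # hypothetical container; must serve /data/metrics on port 80
            image: my-app:latest
            ports:
            - containerPort: 80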

How does it work?

Look at the kubernetes-pods job in the config-map.yaml you are using to configure Prometheus:

- job_name: 'kubernetes-pods'

  kubernetes_sd_configs:
  - role: pod

  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
    target_label: __address__
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: kubernetes_namespace
  - source_labels: [__meta_kubernetes_pod_name]
    action: replace
    target_label: kubernetes_pod_name

Look closely at these three relabel configurations:

- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
  action: keep
  regex: true
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
  action: replace
  target_label: __metrics_path__
  regex: (.+)
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
  action: replace
  regex: ([^:]+)(?::\d+)?;(\d+)
  replacement: $1:$2
  target_label: __address__

Here, the metrics path (__metrics_path__), the port, and whether to scrape metrics from a pod at all are read from that pod's annotations.
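
To make the effect concrete, here is a worked illustration of what those three rules do to a discovered pod (the pod IP 10.42.0.7 is made up):

    # Discovered target labels before relabeling:
    #   __address__                                            = 10.42.0.7
    #   __meta_kubernetes_pod_annotation_prometheus_io_scrape  = true            -> target is kept
    #   __meta_kubernetes_pod_annotation_prometheus_io_path    = /data/metrics   -> copied to __metrics_path__
    #   __meta_kubernetes_pod_annotation_prometheus_io_port    = 80              -> joined into __address__
    #
    # Resulting scrape target after relabeling:
    #   __address__      = 10.42.0.7:80
    #   __metrics_path__ = /data/metrics
    #   i.e. Prometheus scrapes http://10.42.0.7:80/data/metrics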

For more details on how to configure Prometheus, see here.

4 votes

The link provided in the question refers to this ConfigMap for the Prometheus configuration. If that ConfigMap is used, then Prometheus is already configured to scrape pods.

For that configuration (see relabel_configs) to have Prometheus scrape the custom metrics exposed by pods at :80/data/metrics, add these annotations to the pods' deployment configurations:

metadata:
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: '/data/metrics'
    prometheus.io/port: '80'

See the configuration options for Kubernetes discovery in the Prometheus docs (scroll down) for settings related to scraping over HTTPS and more.
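
For example, if the pods only serve metrics over HTTPS, a separate scrape job could set the scheme and TLS options. A minimal sketch, assuming the in-cluster service-account CA path; the job name and file paths are placeholders to adapt to your setup:

    - job_name: 'kubernetes-pods-https'
      scheme: https                    # scrape over TLS instead of plain HTTP
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt   # assumed CA path
        insecure_skip_verify: false
      kubernetes_sd_configs:
      - role: pod
      # ...same relabel_configs as the kubernetes-pods job above...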

Edit: I saw Emruz Hossain's answer only after I posted mine. His answer originally lacked the prometheus.io/scrape: 'true' annotation and used = instead of : as the annotations' name/value separator, which is invalid in YAML or JSON.