
I'm trying to enhance my monitoring and want to expand the set of metrics pulled into Prometheus from our Kubernetes estate. We already have a standalone Prometheus instance with a hard-coded config file monitoring some bare-metal servers, and it hooks into cAdvisor for generic pod metrics.

What I would like to do is have Kubernetes expose the apache_exporter metrics from a webserver deployed in the cluster, and also dynamically add a 2nd, 3rd, etc. webserver as the instances are scaled up.

I've looked at the kube-prometheus project, but it seems to be geared towards setups where there is no established Prometheus deployed. Is there a simple way to get Prometheus to query the Kube API (or etcd) for the current list of pods matching certain criteria (e.g. a label like deploymentType=webserver) and scrape the apache_exporter metrics for those pods, and likewise scrape the mysqld_exporter metrics where deploymentType=mysql?


1 Answer


There's a project called kube-prometheus-stack (formerly prometheus-operator): https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack

It has concepts called ServiceMonitor and PodMonitor:

Each of these is essentially a label selector that points your Prometheus instance at scrape targets. A ServiceMonitor discovers all the pods behind a matching Service; a PodMonitor discovers pods directly. In both cases the Prometheus scrape config is updated and reloaded automatically.

Example PodMonitor:

apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: example
  namespace: monitoring
spec:
  podMetricsEndpoints:
  - interval: 30s                      # how often to scrape
    path: /metrics                     # path the exporter serves metrics on
    port: http                         # must match a named container port on the target pods
  namespaceSelector:
    matchNames:
    - app                              # namespace(s) to look for pods in
  selector:
    matchLabels:
      app.kubernetes.io/name: my-app   # pod labels to match
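
For this PodMonitor to match anything, the target pods must carry the selected label and expose a container port named http. As a sketch, the relevant pod-template fragment of the webserver Deployment might look like the following (the exporter image is a placeholder, and port 9117, apache_exporter's usual default, is an assumption):

spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/name: my-app     # matched by the PodMonitor's selector
    spec:
      containers:
      - name: apache-exporter
        image: apache-exporter:placeholder # placeholder; substitute your exporter image
        ports:
        - name: http                       # name referenced by the PodMonitor's "port" field
          containerPort: 9117              # apache_exporter default port (assumption)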

Note that this PodMonitor object itself must be discovered by the operator. To achieve this you set a podMonitorSelector on the Prometheus custom resource. This additional explicit linkage is intentional: if you have two Prometheus instances in your cluster (say Infra and Product), you can separate which Prometheus gets which PodMonitors in its scrape config.
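
A minimal sketch of such a Prometheus resource, assuming the operator is installed and a prometheus ServiceAccount with the usual RBAC already exists (the names and the team: infra label are illustrative):

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: infra
  namespace: monitoring
spec:
  serviceAccountName: prometheus     # assumed ServiceAccount with RBAC to list pods/services
  podMonitorSelector:
    matchLabels:
      team: infra                    # only PodMonitors carrying this label are picked up
  podMonitorNamespaceSelector: {}    # empty selector = look for PodMonitors in all namespaces
  serviceMonitorSelector:
    matchLabels:
      team: infra

The PodMonitor above would then need team: infra in its metadata.labels for this instance to pick it up.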

The same applies to a ServiceMonitor.
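
For illustration, a ServiceMonitor looks much the same; the main difference is that it selects a Service by its labels and scrapes the pods behind it. The names here are illustrative, and a Service in the app namespace with a port named metrics is assumed:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example
  namespace: monitoring
spec:
  endpoints:                         # "endpoints" rather than "podMetricsEndpoints"
  - interval: 30s
    path: /metrics
    port: metrics                    # must match a named port on the selected Service
  namespaceSelector:
    matchNames:
    - app
  selector:
    matchLabels:
      app.kubernetes.io/name: my-app # Service labels to match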