Can someone guide me through configuring auto-discovery for K8s? The Prometheus server is outside of the cluster. I tried Service Discovery With Kubernetes, and someone mentioned in this discussion:
"I'm not yet a K8s expert enough to explain all the details here, but fundamentally it's perfectly possible to run Prometheus outside of the cluster (and required for things like redundant cross-cluster meta-monitoring). Cf. the in_cluster config option in http://prometheus.io/docs/operating/configuration/#kubernetes-sd-configurations-kubernetes_sd_config. You need to jump through certificate hoops if you run it outside."
So I made a simple configuration:
- job_name: 'kubernetes'
  kubernetes_sd_configs:
  - # The API server addresses. In a cluster this will normally be
    # `https://kubernetes.default.svc`. Supports multiple HA API servers.
    api_servers:
    - https://xxx.xx.xx.xx
    # Run in cluster. This will use the automounted CA certificate and bearer
    # token file at /var/run/secrets/kubernetes.io/serviceaccount/ in the pod.
    in_cluster: false
    # Optional HTTP basic authentication information.
    basic_auth:
      username: prometheus
      password: secret
    # Retry interval between watches if they disconnect.
    retry_interval: 5s
With this I'm getting "unknown fields in kubernetes_sd_config: api_servers, in_cluster, retry_interval", or sometimes other indentation errors.
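Reading that error, it looks like the config schema changed: newer versions seem to want a single api_server plus a role, and in_cluster / retry_interval are gone. This is roughly what I'm planning to try next (field names taken from the current kubernetes_sd_config docs; the address and credentials are placeholders):

- job_name: 'kubernetes'
  kubernetes_sd_configs:
  - # Single API server address; the plural api_servers field was removed.
    api_server: https://xxx.xx.xx.xx
    # What to discover, e.g. node, pod, service or endpoints.
    role: node
    # Optional HTTP basic authentication information.
    basic_auth:
      username: prometheus
      password: secret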
In the sample configuration they mention ca_file. How do I get that certificate file from K8s, or is there any way to point Prometheus at a K8s config file (~/.kube/config)?
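To show what I mean: my current idea is to pull the cluster CA out of ~/.kube/config and point ca_file at it, something like the sketch below (the kubectl jsonpath assumes the first cluster entry in my kubeconfig is the right one, and the paths are just placeholders):

# Extract the base64-encoded certificate-authority-data from ~/.kube/config:
#   kubectl config view --raw \
#     -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' \
#     | base64 -d > /etc/prometheus/k8s-ca.crt
# (If the kubeconfig uses certificate-authority with a file path instead,
#  that file can be copied over directly.)
- job_name: 'kubernetes'
  kubernetes_sd_configs:
  - api_server: https://xxx.xx.xx.xx
    role: node
    tls_config:
      # CA used to verify the API server's certificate.
      ca_file: /etc/prometheus/k8s-ca.crt
    basic_auth:
      username: prometheus
      password: secret

Is that the right approach, or is there a more direct way to reuse the kubeconfig?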