I am deploying Prometheus using the stable/prometheus-operator chart, installed in the monitoring namespace. In the default namespace I have a pod named my-pod running with three replicas. The pods expose metrics on port 9009 (I have verified this with kubectl port-forward and confirmed the metrics show up at localhost:9009). I would like prometheus-operator to scrape these metrics, so I added the configuration below to values.yaml:
prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      - job_name: 'my-pod-job'
        scrape_interval: 15s
        kubernetes_sd_configs:
          - role: pod
            namespaces:
              names:
                - default
        relabel_configs:
          - source_labels: [__meta_kubernetes_pod_name]
            action: keep
            regex: 'my-pod'
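For context, the workload behind my-pod is roughly the Deployment sketched below. The labels, container name, and image are placeholders, and I am not certain the real manifest declares containerPort 9009 the way this sketch does, which may matter for pod discovery:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-pod
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-pod                        # illustrative label
  template:
    metadata:
      labels:
        app: my-pod
    spec:
      containers:
        - name: app                      # placeholder container name
          image: example/my-pod:latest   # placeholder image
          ports:
            - name: metrics
              containerPort: 9009        # the port serving the metrics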
I then install Prometheus using the command below:
helm upgrade --install prometheus stable/prometheus-operator \
  --set kubeEtcd.enabled=false \
  --set kubeControllerManager.enabled=false \
  --set kubeScheduler.enabled=false \
  --set prometheusOperator.createCustomResource=true \
  --set grafana.smtp.existingSecret=smtp-secret \
  --set kubelet.serviceMonitor.https=true \
  --set kubelet.enabled=true \
  -f values.yaml --namespace monitoring
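After the release is up, I confirm the extra job actually made it into the rendered Prometheus configuration. The service name below is what the chart generated in my install; it may differ in yours:

# Port-forward to the Prometheus server service created by the chart
kubectl -n monitoring port-forward svc/prometheus-prometheus-oper-prometheus 9090
# The rendered scrape config is then visible at http://localhost:9090/config,
# and discovered targets at http://localhost:9090/service-discovery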
However, when I go to /service-discovery in the Prometheus UI I see:
my-pod-job (0/40 active targets)
Question
How can I configure Prometheus to scrape metrics from pods running in the default namespace and exposing metrics on port 9009?
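For reference, this is the direction I have been experimenting with, so far without success. Both changes are guesses rather than a known-good configuration: the keep regex is widened because relabel regexes are fully anchored and pods created by a Deployment carry generated name suffixes, and the __address__ rewrite is an attempt to force the scrape port:

relabel_configs:
  # Deployment pods are named my-pod-<hash>-<hash>, and relabel regexes are
  # fully anchored, so a bare 'my-pod' would never match the actual pod names.
  - source_labels: [__meta_kubernetes_pod_name]
    action: keep
    regex: 'my-pod-.*'
  # Rewrite the target address to the pod IP on port 9009, in case the
  # container spec does not declare that port.
  - source_labels: [__meta_kubernetes_pod_ip]
    action: replace
    target_label: __address__
    regex: '(.+)'
    replacement: '${1}:9009'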