We want our Prometheus installation to scrape the metrics of both containers within a pod.
One container exposes the metrics via HTTPS on port 443, whereas the other container exposes them via HTTP on port 8080. Both containers provide the metrics at the same path, namely /metrics.
If we declare prometheus.io/scheme to be either http or https, only one container is scraped. For the other one we always receive: server returned HTTP status 400 Bad Request
The same happens if we do not define prometheus.io/scheme at all. Prometheus then uses http for both ports and fails for the container that exposes its metrics on port 443, since that container expects HTTPS requests only.
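For reference, the scheme annotation lives in the pod template's metadata, and Kubernetes annotations always apply to the pod as a whole, so there is no per-container variant of it:

annotations:
  prometheus.io/scrape: "true"
  prometheus.io/path: "/metrics"
  prometheus.io/scheme: "https"   # pod-wide, affects both scrape targets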
Is there a way to tell Prometheus exactly how it should scrape the individual containers within our deployment? What are feasible workarounds for acquiring the metrics of both containers?
Versions
Kubernetes: 1.10.2
Prometheus: 2.2.1
Deployment excerpt
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xxx
  namespace: xxx
spec:
  selector:
    matchLabels:
      app: xxx
  template:
    metadata:
      labels:
        app: xxx
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/path: "/metrics"
    spec:
      containers:
        - name: container-1
          image: xxx
          ports:
            - containerPort: 443
        - name: container-2
          image: xxx
          ports:
            - containerPort: 8080
Prometheus configuration:
- job_name: kubernetes-pods
  scrape_interval: 1m
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  kubernetes_sd_configs:
    - api_server: null
      role: pod
      namespaces:
        names: []
  relabel_configs:
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      separator: ;
      regex: "true"
      replacement: $1
      action: keep
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
      separator: ;
      regex: (.+)
      target_label: __metrics_path__
      replacement: $1
      action: replace
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      separator: ;
      regex: ([^:]+)(?::\d+)?;(\d+)
      target_label: __address__
      replacement: $1:$2
      action: replace
    - separator: ;
      regex: __meta_kubernetes_pod_label_(.+)
      replacement: $1
      action: labelmap
    - source_labels: [__meta_kubernetes_namespace]
      separator: ;
      regex: (.*)
      target_label: kubernetes_namespace
      replacement: $1
      action: replace
    - source_labels: [__meta_kubernetes_pod_name]
      separator: ;
      regex: (.*)
      target_label: kubernetes_pod_name
      replacement: $1
      action: replace
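One conceivable workaround, sketched below, is to derive the scheme from the discovered container port instead of a pod-wide annotation. This assumes the prometheus.io/port annotation is left unset, so that each declared containerPort yields its own scrape target:

# Assumed addition to relabel_configs: switch to HTTPS only for the target
# discovered on container port 443; all other targets keep the default
# http scheme from the job configuration.
- source_labels: [__meta_kubernetes_pod_container_port_number]
  separator: ;
  regex: "443"
  target_label: __scheme__
  replacement: https
  action: replace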
Comment
Have you considered adding a sidecar (e.g. haproxy) to smooth over the protocol imbalance between the two current containers? I think that you can name the sidecar container container-1, change the current container-1 to something else, and then (from Prometheus's PoV) the metrics will appear with their correct name, and only you would know about the trickery. I didn't see anything in the discovery source that would allow the fine-grained control you're describing. – mdaniel
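A minimal sketch of that sidecar idea, where the container name, image tag, and listen port are all assumptions: the proxy terminates TLS against container-1 locally and re-exposes its metrics over plain HTTP, so every target can be scraped with scheme: http:

containers:
  # Hypothetical sidecar: listens on plain HTTP port 9443 and forwards to
  # container-1's HTTPS endpoint on localhost; the actual proxy rules would
  # be mounted from a ConfigMap (not shown here).
  - name: metrics-proxy
    image: haproxy:2.8
    ports:
      - containerPort: 9443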