
I've written a node exporter in Go named "my-node-exporter" with some collectors that expose metrics. From my cluster, I can view the metrics just fine with the following:

kubectl port-forward my-node-exporter-999b5fd99-bvc2c 9090:8080 -n kube-system
localhost:9090/metrics

However, when I try to view my metrics in the Prometheus dashboard

kubectl port-forward prometheus-prometheus-operator-158978-prometheus-0 9090
localhost:9090/graph

my metrics are nowhere to be found and I can only see the default metrics. Am I missing a step to get my metrics onto the graph?


Here are the pods in my default namespace, which has my Prometheus stack in it.

pod/alertmanager-prometheus-operator-158978-alertmanager-0            2/2     Running   0          85d
pod/grafana-1589787858-fd7b847f9-sxxpr                                1/1     Running   0          85d
pod/prometheus-operator-158978-operator-75f4d57f5b-btwk9              2/2     Running   0          85d
pod/prometheus-operator-1589787700-grafana-5fb7fd9d8d-2kptx           2/2     Running   0          85d
pod/prometheus-operator-1589787700-kube-state-metrics-765d4b7bvtdhj   1/1     Running   0          85d
pod/prometheus-operator-1589787700-prometheus-node-exporter-bwljh     1/1     Running   0          85d
pod/prometheus-operator-1589787700-prometheus-node-exporter-nb4fv     1/1     Running   0          85d
pod/prometheus-operator-1589787700-prometheus-node-exporter-rmw2f     1/1     Running   0          85d
pod/prometheus-prometheus-operator-158978-prometheus-0                3/3     Running   1          85d

I used Helm to install the Prometheus Operator.
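For reference, the install was roughly along these lines (from memory, so the chart version and exact flags may have differed):

helm install prometheus-operator-1589787700 stable/prometheus-operator --namespace default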

EDIT: adding my YAML files below.

# Configuration to deploy
#
# example usage: kubectl create -f <this_file>

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-node-exporter-sa
  namespace: kube-system

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: my-node-exporter-binding
subjects:
  - kind: ServiceAccount
    name: my-node-exporter-sa
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: my-node-exporter-role
  apiGroup: rbac.authorization.k8s.io

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: my-node-exporter-role
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]

---
#####################################################
############  Service ############
#####################################################

kind: Service
apiVersion: v1
metadata:
  name: my-node-exporter-svc
  namespace: kube-system
  labels:
    app: my-node-exporter
spec:
  ports:
    - name: my-node-exporter
      port: 8080
      targetPort: metrics
      protocol: TCP
  selector:
    app: my-node-exporter

---
#########################################################
############   Deployment  ############
#########################################################

kind: Deployment
apiVersion: apps/v1
metadata:
  name: my-node-exporter
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: my-node-exporter
  replicas: 1
  template:
    metadata:
      labels:
        app: my-node-exporter
    spec:
      serviceAccount: my-node-exporter-sa
      containers:
        - name: my-node-exporter
          image: locationofmyimagehere
          args:
            - "--telemetry.addr=8080"
            - "--telemetry.path=/metrics"
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: log-dir
              mountPath: /var/log
      volumes:
        - name: log-dir
          hostPath:
            path: /var/log

ServiceMonitor YAML

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-node-exporter-service-monitor
  labels:
    app: my-node-exporter-service-monitor
spec:
  selector:
    matchLabels:
      app: my-node-exporter
    matchExpressions:
      - {key: app, operator: Exists}
  endpoints:
  - port: my-node-exporter
  namespaceSelector:
    matchNames:
    - default
    - kube-system

Prometheus YAML

# Prometheus will use selected ServiceMonitor
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: my-node-exporter
  labels:
    team: frontend
spec:
  serviceMonitorSelector:
      matchLabels:
        app: my-node-exporter
      matchExpressions:
      - key: app
        operator: Exists

2 Answers


You need to explicitly tell Prometheus what metrics to collect, and where to collect them from, by first creating a Service that points to your my-node-exporter pods (if you haven't already) and then a ServiceMonitor, as described in the Prometheus Operator docs (search for the phrase "This Service object is discovered by a ServiceMonitor").
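For example, going by the manifests in the question, a ServiceMonitor roughly like the following should line up (a sketch, untested): the object carries the app: my-node-exporter label because that is what the Prometheus spec's serviceMonitorSelector matches, and the endpoint port name matches the named port on the Service.

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-node-exporter-service-monitor
  labels:
    app: my-node-exporter        # matched by the Prometheus serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app: my-node-exporter      # matches the labels on my-node-exporter-svc
  namespaceSelector:
    matchNames:
      - kube-system              # where the Service lives
  endpoints:
    - port: my-node-exporter     # the named port on the Service
      path: /metrics

Also make sure the ServiceMonitor itself sits in a namespace your Prometheus object watches; with no serviceMonitorNamespaceSelector set, that is the Prometheus object's own namespace.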


Getting the Deployment/Service/ServiceMonitor/PrometheusRule combination right under the Prometheus Operator takes some care.
So I created a Helm chart repo, kehao95/helm-prometheus-exporter, for installing Prometheus exporters, including your custom exporter; you can try it out.
It creates not only the exporter Deployment but also the Service/ServiceMonitor/PrometheusRule for you.

  • install the chart
helm repo add kehao95 https://kehao95.github.io/helm-prometheus-exporter/
  • create a values file my-exporter.yaml for kehao95/prometheus-exporter
exporter: 
  image: your-exporter
  tag: latest
  port: 8080
  args:
  - "--telemetry.addr=8080"
  - "--telemetry.path=/metrics"
  • install it with helm
helm install --namespace yourns my-exporter kehao95/prometheus-exporter -f my-exporter.yaml

Then you should see your metrics in Prometheus.
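Either way, a quick sanity check (a sketch; adjust the names and namespace to yours) is to confirm the ServiceMonitor exists and then look at the Prometheus Targets page:

kubectl get servicemonitors -n yourns
kubectl port-forward prometheus-prometheus-operator-158978-prometheus-0 9090
localhost:9090/targets

If the exporter shows up there but is marked down, the problem is usually the Service port name or the scrape path rather than the ServiceMonitor selection.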