2 votes

We have an Istio cluster and we are trying to configure horizontal pod autoscaling for Kubernetes. We want to use the request count as our custom metric for the HPA. How can we utilise Istio's Prometheus for this purpose?

1
You can use custom metrics from Istio's Prometheus for the Kubernetes HPA. I'll try to reproduce this case and get back to you with an answer. – Artem Golenyaev
@ArtemGolenyaev Have you been able to reproduce this? – ratneshjd
Sorry for the delay, I'll reply soon. – Artem Golenyaev

1 Answer

3 votes

This question turned out to be much more complex than I expected, but finally here I am with the answer.

  1. First of all, you need to configure your application to provide custom metrics; this is done on the application side. Here is an example of how to do it in Go: Watching Metrics With Prometheus.

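    For illustration, here is a minimal sketch of such instrumentation in Go, using the official Prometheus client library. It is not the podinfo source; the metric name http_requests_total and the trivial handler are assumptions for the example:

    package main

    import (
        "net/http"

        "github.com/prometheus/client_golang/prometheus"
        "github.com/prometheus/client_golang/prometheus/promhttp"
    )

    // httpRequests counts handled HTTP requests; Prometheus reads its
    // value from the /metrics endpoint on each scrape.
    var httpRequests = prometheus.NewCounter(prometheus.CounterOpts{
        Name: "http_requests_total",
        Help: "Total number of HTTP requests served.",
    })

    func main() {
        prometheus.MustRegister(httpRequests)

        // serve metrics on the default path expected by the scrape config
        http.Handle("/metrics", promhttp.Handler())
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            httpRequests.Inc() // one increment per handled request
            w.Write([]byte("ok"))
        })
        http.ListenAndServe(":9898", nil)
    }

    With the adapter's default rules (step 4 below), a cumulative counter named http_requests_total is exposed through the custom metrics API as the rate-based metric http_requests, which is what the HPA in step 5 consumes.
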
  2. Secondly, you need to deploy the application as a Deployment (or a Pod, or whatever suits you) to Kubernetes. For example:

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: podinfo
    spec:
      replicas: 2
      template:
        metadata:
          labels:
            app: podinfo
          annotations:
            prometheus.io/scrape: 'true'
        spec:
          containers:
          - name: podinfod
            image: stefanprodan/podinfo:0.0.1
            imagePullPolicy: Always
            command:
              - ./podinfo
              - -port=9898
              - -logtostderr=true
              - -v=2
            volumeMounts:
              - name: metadata
                mountPath: /etc/podinfod/metadata
                readOnly: true
            ports:
            - containerPort: 9898
              protocol: TCP
            readinessProbe:
              httpGet:
                path: /readyz
                port: 9898
              initialDelaySeconds: 1
              periodSeconds: 2
              failureThreshold: 1
            livenessProbe:
              httpGet:
                path: /healthz
                port: 9898
              initialDelaySeconds: 1
              periodSeconds: 3
              failureThreshold: 2
            resources:
              requests:
                memory: "32Mi"
                cpu: "1m"
              limits:
                memory: "256Mi"
                cpu: "100m"
          volumes:
            - name: metadata
              downwardAPI:
                items:
                  - path: "labels"
                    fieldRef:
                      fieldPath: metadata.labels
                  - path: "annotations"
                    fieldRef:
                      fieldPath: metadata.annotations
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: podinfo
      labels:
        app: podinfo
    spec:
      type: NodePort
      ports:
        - port: 9898
          targetPort: 9898
          nodePort: 31198
          protocol: TCP
      selector:
        app: podinfo
    

    Pay attention to the annotation prometheus.io/scrape: 'true'. It is required so that Prometheus reads metrics from the resource. Also note that there are two more annotations with default values; if you change them in your application, you need to add them with the correct values:

    • prometheus.io/path: If the metrics path is not /metrics, define it with this annotation.
    • prometheus.io/port: Scrape the pod on the indicated port instead of the pod’s declared ports (default is a port-free target if none are declared).
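
    For example, if your application served metrics on /stats on port 8080 (hypothetical values), the Pod template would carry:

    metadata:
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/path: '/stats'   # hypothetical non-default path
        prometheus.io/port: '8080'     # hypothetical non-default port
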
  3. Next, the Prometheus deployed with Istio uses its own configuration, adapted for Istio's purposes, and by default it skips custom metrics from Pods. Therefore, you need to modify it a little. In my case, I took the configuration for Pod metrics from this example and modified Istio's Prometheus configuration only for Pods:

    kubectl edit configmap -n istio-system prometheus
    

    I changed the order of the labels according to the example mentioned above:

    # scrape config for Pods annotated with prometheus.io/scrape
    - job_name: 'kubernetes-pods'
      # if you want to use metrics on jobs, set the below field to
      # true to prevent Prometheus from setting the `job` label
      # automatically.
      honor_labels: false
      kubernetes_sd_configs:
      - role: pod
      # skip verification so you can do HTTPS to pods
      tls_config:
        insecure_skip_verify: true
      # make sure your labels are in order
      relabel_configs:
      # these labels tell Prometheus to automatically attach source
      # pod and namespace information to each collected sample, so
      # that they'll be exposed in the custom metrics API automatically.
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: pod
      # these labels tell Prometheus to look for
      # prometheus.io/{scrape,path,port} annotations to configure
      # how to scrape
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        # only override the scheme when the annotation is explicitly set
        regex: (https?)
    

    After that, custom metrics appeared in Prometheus. However, be careful when changing the Prometheus configuration, because some metrics required by Istio may disappear; check everything carefully.
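
    As a quick optional check (a sketch; it assumes the Prometheus service name used in the adapter URL in the next step, and the http_requests_total metric from the example above), you can port-forward Istio's Prometheus and query it directly:

    kubectl -n istio-system port-forward svc/prometheus 9090 &
    curl -s 'http://localhost:9090/api/v1/query?query=http_requests_total' | jq .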

  4. Now it is time to install the Prometheus custom metrics adapter.

    • Download this repository.
    • Change the address of the Prometheus server in the file <repository-directory>/deploy/manifests/custom-metrics-apiserver-deployment.yaml. Example: - --prometheus-url=http://prometheus.istio-system:9090/ (see the excerpt below).
    • Run the command kubectl apply -f <repository-directory>/deploy/manifests. After some time, custom.metrics.k8s.io/v1beta1 should appear in the output of kubectl api-versions.
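
    The flag lives in the adapter container's args; an illustrative excerpt (surrounding fields are omitted and may differ between adapter versions):

    # deploy/manifests/custom-metrics-apiserver-deployment.yaml (excerpt)
    spec:
      containers:
      - name: custom-metrics-apiserver
        args:
        - --prometheus-url=http://prometheus.istio-system:9090/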

    Also, check the output of the custom metrics API using the commands kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq . and kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests" | jq . The output of the last one should look like the following example:

    {
      "kind": "MetricValueList",
      "apiVersion": "custom.metrics.k8s.io/v1beta1",
      "metadata": {
        "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/http_requests"
      },
      "items": [
        {
          "describedObject": {
            "kind": "Pod",
            "namespace": "default",
            "name": "podinfo-6b86c8ccc9-kv5g9",
            "apiVersion": "/__internal"
          },
          "metricName": "http_requests",
          "timestamp": "2018-01-10T16:49:07Z",
          "value": "901m"
        },
        {
          "describedObject": {
            "kind": "Pod",
            "namespace": "default",
            "name": "podinfo-6b86c8ccc9-nm7bl",
            "apiVersion": "/__internal"
          },
          "metricName": "http_requests",
          "timestamp": "2018-01-10T16:49:07Z",
          "value": "898m"
        }
      ]
    }
    

    If it does, you can move on to the next step. If it doesn't, look at which APIs are available for Pods with kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq . | grep "pods/", and for http_requests with kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq . | grep "http". Metric names are generated from the metrics Prometheus gathers from Pods, so if they are empty, that is the direction to investigate.

  5. The last step is configuring the HPA and testing it. In my case, I created an HPA for the podinfo application defined above:

    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    metadata:
      name: podinfo
    spec:
      scaleTargetRef:
        apiVersion: extensions/v1beta1
        kind: Deployment
        name: podinfo
      minReplicas: 2
      maxReplicas: 10
      metrics:
      - type: Pods
        pods:
          metricName: http_requests
          targetAverageValue: 10
    

    and used a simple Go application to generate test load:

    #install hey
    go get -u github.com/rakyll/hey
    #do 10K requests, rate limited to 25 QPS (5 workers x 5 QPS each)
    hey -n 10000 -q 5 -c 5 http://<K8S-IP>:31198/healthz
    

    After some time, I saw the scaling change by running the commands kubectl describe hpa and kubectl get hpa.
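
    For reference, the HPA computes the desired replica count as desiredReplicas = ceil(currentReplicas × currentMetricValue / targetValue). A rough worked example for the load above (actual per-pod rates will vary):

    # 25 QPS spread across 2 pods ≈ 12.5 requests/s per pod; target is 10
    # desiredReplicas = ceil(2 * 12.5 / 10) = ceil(2.5) = 3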

I used the instructions on creating custom metrics from the article Ensure High Availability and Uptime With Kubernetes Horizontal Pod Autoscaler and Prometheus.

All useful links in one place: