
In the official Kubernetes documentation:

https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/

We can see the following:

This example requires a running Kubernetes cluster and kubectl, version 1.2 or later. Metrics server monitoring needs to be deployed in the cluster to provide metrics through the Metrics API. Horizontal Pod Autoscaler uses this API to collect metrics. To learn how to deploy the metrics-server, see the metrics-server documentation. To specify multiple resource metrics for a Horizontal Pod Autoscaler, you must have a Kubernetes cluster and kubectl at version 1.6 or later. To make use of custom metrics, your cluster must be able to communicate with the API server providing the custom Metrics API. Finally, to use metrics not related to any Kubernetes object you must have a Kubernetes cluster at version 1.10 or later, and you must be able to communicate with the API server that provides the external Metrics API. See the Horizontal Pod Autoscaler user guide for more details.
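Since those requirements depend on both the cluster and kubectl versions, a quick way to check them (assuming kubectl is already configured to talk to the cluster) is:

> kubectl version

which prints both the client and the server version.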

In order to verify that I can "make use of custom metrics", I ran:

kubectl get metrics-server

And got the result: error: the server doesn't have a resource type "metrics-server"

May I ask how I can verify that the "Metrics server monitoring" the documentation mentions is actually deployed in the cluster, please?

Thank you


1 Answer


Under the hood, kubectl works by sending API requests to particular endpoints on the Kubernetes API server. kubectl ships with a set of predefined resource types, but for endpoints it does not know about, you can use the --raw flag to send a request to the API server directly.
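As a side note, one quick check that does not need --raw at all is to look for the APIService object that metrics-server registers with the aggregation layer, and for its Deployment. The names below assume the standard metrics-server manifest, which registers v1beta1.metrics.k8s.io and deploys into kube-system:

# the APIService should exist and report AVAILABLE=True
> kubectl get apiservice v1beta1.metrics.k8s.io

# the standard manifest creates this Deployment
> kubectl get deployment metrics-server -n kube-system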

In your case, you can check for the built-in Metrics API with this command:

> kubectl get --raw /apis/metrics.k8s.io
{"kind":"APIGroup","apiVersion":"v1","name":"metrics.k8s.io","versions":[{"groupVersion":"metrics.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"metrics.k8s.io/v1beta1","version":"v1beta1"}}

kubectl returns the raw JSON for the API group. You can then follow the groupVersion in the response to query your target resources. In my case, to get the actual pod metrics, I need to use this command:

> kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods
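You can also scope the same endpoint to a single namespace, or use the built-in kubectl top subcommand, which reads from this same Metrics API (the kube-system namespace here is just an example):

# pod metrics for one namespace only
> kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods

# the same data, human-readable
> kubectl top pods -n kube-system
> kubectl top nodes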

This endpoint serves the built-in resource metrics, which are CPU and memory. If you want to use custom metrics, you will need to install Prometheus, the Prometheus adapter, and an exporter appropriate for your application. To verify the custom metrics setup, you can query the following endpoint:

> kubectl get --raw /apis/custom.metrics.k8s.io
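If a custom metrics adapter (such as the Prometheus adapter) is installed, listing its v1beta1 group enumerates every custom metric it serves; the external Metrics API quoted in the documentation has its own group. Both commands fail with a NotFound-style error when the corresponding adapter is missing, which is itself a useful signal:

# all custom metrics exposed by the adapter
> kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1

# external metrics (Kubernetes 1.10+), if an external metrics adapter is deployed
> kubectl get --raw /apis/external.metrics.k8s.io/v1beta1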