16
votes

I have installed a local instance of Kubernetes via Docker on my Mac.

While following the walkthrough on how to enable autoscaling on a deployment, I ran into an issue: the autoscaler can't read the metrics.

When I run kubectl describe hpa, the current CPU usage comes back as unknown / 50%, with these warnings:

Warning FailedGetResourceMetric: horizontal-pod-autoscaler unable to get metrics for resource cpu: unable to fetch metrics from API: the server could not find the requested resource (get pods.metrics.k8s.io)

Warning FailedComputeMetricsReplicas horizontal-pod-autoscaler failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from API: the server could not find the requested resource (get pods.metrics.k8s.io)

I installed the metrics-server by cloning it with git clone https://github.com/kubernetes-incubator/metrics-server.git and then running kubectl create -f deploy/1.8+


5 Answers

33
votes

I finally got it working. Here are the full steps I took:

  1. Have Kubernetes running within Docker

  2. Delete any previous instance of metrics-server from your Kubernetes instance with kubectl delete -n kube-system deployments.apps metrics-server

  3. Clone metrics-server with git clone https://github.com/kubernetes-incubator/metrics-server.git

  4. Edit the file deploy/1.8+/metrics-server-deployment.yaml to override the default command by adding a command section that didn't exist before. The new section instructs metrics-server to allow an insecure connection, i.e. not to verify the kubelet's TLS certificates. Do this only for a local Docker setup, not for production deployments of metrics-server:

    containers:
    - name: metrics-server
      image: k8s.gcr.io/metrics-server-amd64:v0.3.1
      command:
        - /metrics-server
        - --kubelet-insecure-tls
    
  5. Add metrics-server to your Kubernetes instance with kubectl create -f deploy/1.8+ (if that errors on the .yaml files, run kubectl apply -f deploy/1.8+ instead)

  6. Remove the autoscaler from your deployment and add it again. It should now show the current CPU usage. (A sketch of those commands follows this list.)
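
For step 6, a minimal sketch of recreating the autoscaler (the deployment name my-app and the thresholds are placeholders; adjust them to your setup):

    # delete the existing HPA, recreate it, then check that metrics show up
    kubectl delete hpa my-app
    kubectl autoscale deployment my-app --cpu-percent=50 --min=1 --max=10
    kubectl describe hpa my-app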

EDIT July 2020:

Most of the above steps still hold, except that metrics-server has changed and that file no longer exists.

The repo now recommends installing it like this:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml

So we can now download this file,

curl -L https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml --output components.yaml

add --kubelet-insecure-tls under args (around line 88 of the file) in the metrics-server Deployment (a sketch of the edited section follows), and run

kubectl apply -f components.yaml
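
For reference, the edited args of the metrics-server container in components.yaml might look roughly like this (the pre-existing args shown here are from the v0.3.6 release and may differ in your copy):

        args:
          - --cert-dir=/tmp
          - --secure-port=4443
          - --kubelet-insecure-tls
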
8
votes

For those whose nodes use an Internal-IP, this may work for you. Follow @Mr.Turtle's answer above through step 4, but add one more flag to the command:

  containers:
  - name: metrics-server
    image: k8s.gcr.io/metrics-server-amd64:v0.3.3
    command:
      - /metrics-server
      - --kubelet-insecure-tls
      - --kubelet-preferred-address-types=InternalIP
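
Once the patched metrics-server is running again, a quick way to verify that metrics are being collected:

  # both should print usage numbers instead of "metrics not available yet"
  kubectl top nodes
  kubectl top pods
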
4
votes

We upgraded to AWS EKS version 1.13.7 and that's when we started having problems with HPA. It turned out I had to specify a value for resources.requests.cpu (200m in my case) on my deployment, and then the HPA started working.
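
The HPA computes CPU utilization as a percentage of the requested CPU, so without a request it has nothing to compare against. A rough sketch of the relevant part of the deployment spec (container name and image are placeholders):

    spec:
      containers:
      - name: my-app
        image: my-app:latest
        resources:
          requests:
            cpu: 200m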

0
votes

I had the same issue while using my kubeadm Kubernetes lab; the updated installation procedure is here: https://github.com/kubernetes-sigs/metrics-server
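
At the time of writing, that repo's README suggests an install along these lines (check the README for the current command):

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml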

This solved the issue: horizontal-pod-autoscaler unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)

0
votes

If anyone still has problems with this issue, here is what fixed it for me on minikube:

I had 2 deployments with the same label, something like this:

kind: Deployment
metadata:
  name: webserver
spec:
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web

---

kind: Deployment
metadata:
  name: database
spec:
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web

I renamed the label and matchLabels of the database deployment (e.g. to app: db), then deleted both deployments and applied the new config, and voilà, it worked (after hours of trying to solve the problem). The corrected manifest is sketched below.
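
A sketch of the corrected database deployment (only the relevant fields shown, with the renamed label):

kind: Deployment
metadata:
  name: database
spec:
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db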

Further information on this issue: https://github.com/kubernetes/kubernetes/issues/79365