2 votes

I have a GKE cluster with autoscaling enabled and a single node pool. This node pool has a minimum of 1 node and a maximum of 5. While testing the autoscaling of this cluster, it correctly scaled up (added a new node) when I added more replicas to my deployment. When I removed my deployment I would have expected it to scale down, but looking at the logs it is failing because it cannot evict the kube-dns pod from the node:

reason: {
 messageId: "no.scale.down.node.pod.kube.system.unmovable"        
 parameters: [
  0: "kube-dns-7c976ddbdb-brpfq"         
 ]
}

kube-dns isn't being run as a daemonset, but I do not have any control over that as this is a managed cluster.

I am using Kubernetes 1.16.13-gke.1.

How can I make the cluster node pool scale down?

This behavior depends on the state of your cluster. What have you installed on it? StatefulSets? DaemonSets? Other addons? - guillaume blaquiere
@guillaumeblaquiere I have only deployed a simple busybox deployment with some requests and limits. I have no other deployments, and when I want to test the scale-down I completely remove this deployment. kube-dns was deployed by GKE and is not running as a DaemonSet. - rj93
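For reference, a minimal sketch of the kind of test deployment described in the comments above; the name, image, replica count, and resource values are illustrative assumptions, not the asker's actual manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox-test            # hypothetical name
spec:
  replicas: 5                   # scale this up/down to trigger the autoscaler
  selector:
    matchLabels:
      app: busybox-test
  template:
    metadata:
      labels:
        app: busybox-test
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["sleep", "3600"]
        resources:
          requests:
            cpu: 500m
            memory: 128Mi
          limits:
            cpu: 500m
            memory: 128Mi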

2 Answers

3 votes

The autoscaler will not evict pods in the kube-system namespace unless they are managed by a DaemonSet or they have a PodDisruptionBudget.

For kube-dns, as well as kube-dns-autoscaler and a few other GKE-managed deployments in kube-system, you need to add a PodDisruptionBudget.

For example:

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  annotations:
  labels:
    k8s-app: kube-dns
  name: kube-dns-bbc
  namespace: kube-system
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
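
A minimal sketch of applying and verifying it, assuming the manifest is saved as kube-dns-pdb.yaml (a hypothetical filename):

# Apply the PodDisruptionBudget and confirm it matches the kube-dns pods
kubectl apply -f kube-dns-pdb.yaml
kubectl get pdb -n kube-system

With maxUnavailable: 1 the autoscaler is allowed to evict one kube-dns replica at a time, which should let it drain an underutilized node and scale the pool down.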
1 vote

I found this GitHub issue, which specifies that you need to add a taint to the node pool. I did this, and the node pool then auto-scaled down to zero.

Documentation can be found here.
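
As a rough sketch of that approach (the pool name, cluster name, and taint key/value are placeholders, not taken from the linked issue): a node pool carrying a taint will not receive GKE-managed kube-system pods such as kube-dns, so kube-dns should not block scale-down there, and your own workload opts in with a matching toleration.

# Create an autoscaled node pool with a taint (placeholder values)
gcloud container node-pools create scalable-pool \
  --cluster=my-cluster \
  --enable-autoscaling --min-nodes=0 --max-nodes=5 \
  --node-taints=dedicated=scalable:NoSchedule

The workload then needs a matching toleration under spec.template.spec of its Deployment:

tolerations:
- key: "dedicated"
  operator: "Equal"
  value: "scalable"
  effect: "NoSchedule"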