I have a GKE cluster with autoscaling enabled and a single node pool, with a minimum of 1 node and a maximum of 5. While testing the autoscaling of this cluster, it correctly scaled up (added a new node) when I added more replicas to my deployment. When I deleted the deployment, I expected it to scale back down, but looking at the logs it is failing because it cannot evict the kube-dns pod from the node:
```
reason: {
  messageId: "no.scale.down.node.pod.kube.system.unmovable"
  parameters: [
    0: "kube-dns-7c976ddbdb-brpfq"
  ]
}
```
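For reference, the test was roughly the following (`web` is a placeholder for my actual deployment name; the exact workload doesn't matter):

```
# Scale the workload up so pending pods force the autoscaler to add a node
kubectl scale deployment web --replicas=10

# Then remove the workload entirely, expecting the autoscaler to
# eventually drain and remove the extra node(s)
kubectl delete deployment web
```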
kube-dns isn't being run as a DaemonSet (it is a Deployment, as shown below), but I do not have any control over that, as this is a managed cluster.
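This can be confirmed by listing the workloads in kube-system:

```
# kube-dns appears under the Deployments, not the DaemonSets
kubectl -n kube-system get deployments,daemonsets
```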
I am using Kubernetes 1.16.13-gke.1.
How can I make the cluster node pool scale down?