5
votes

I'm running a Kubernetes cluster on bare-metal servers, and cluster nodes are added and removed regularly. But when a node is removed, Kubernetes does not automatically remove it from the node list, and kubectl get nodes keeps showing NotReady nodes. Is there an automated way to achieve this? I want Kubernetes to handle nodes the same way it handles pods.

I believe the cluster-autoscaler does that, so you might try running it, or look around in its source and see how to make your own controller that does the same thing. – mdaniel
Which pod network add-on are you using? – Anshul Jindal

2 Answers

2
votes

To remove a node, follow the steps below.

Run on the master:
# kubectl cordon <node-name>
# kubectl drain <node-name> --force --ignore-daemonsets  --delete-emptydir-data
# kubectl delete node <node-name>
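
The three steps above can be wrapped in a small helper function (a sketch; the `remove_node` name and the example node name are assumptions, not part of the original answer, and it assumes kubectl is already configured against the cluster):

```shell
# Sketch: run the cordon -> drain -> delete sequence above for one node.
remove_node() {
  node="$1"
  # Mark the node unschedulable so no new pods land on it.
  kubectl cordon "$node"
  # Evict the pods that are still running there.
  kubectl drain "$node" --force --ignore-daemonsets --delete-emptydir-data
  # Remove the node object from the cluster.
  kubectl delete node "$node"
}
```

Usage: `remove_node <node-name>`.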
0
votes

You can use this little Bash one-liner, or schedule it as a cron job:

kubectl get nodes --no-headers | awk '$2 == "NotReady" {print $1}' | xargs -r kubectl delete node

(`--no-headers` drops the header row, the awk condition matches only the STATUS column, and GNU xargs' `-r` skips the delete entirely when no node is NotReady.)
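
The cron-job option mentioned above could look like this crontab entry (a sketch; the 10-minute interval is an assumption, and it assumes kubectl is on the cron user's PATH with a working kubeconfig):

```shell
# crontab entry: every 10 minutes, delete any node whose STATUS is NotReady.
# Assumes kubectl and a valid kubeconfig for the user running this cron job.
*/10 * * * * kubectl get nodes --no-headers | awk '$2 == "NotReady" {print $1}' | xargs -r kubectl delete node
```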