I'm running a three-node GKE cluster on GCE. I want to drain one node and delete the underlying VM.
The documentation for the kubectl drain command says:

"Once it returns (without giving an error), you can power down the node (or equivalently, if on a cloud platform, delete the virtual machine backing the node)."
I execute the following commands:
Get the nodes:

$ kl get nodes
NAME                                      STATUS    AGE
gke-jcluster-default-pool-9cc4e660-6q21   Ready     43m
gke-jcluster-default-pool-9cc4e660-rx9p   Ready     6m
gke-jcluster-default-pool-9cc4e660-xr4z   Ready     23h
Drain node rx9p:

$ kl drain gke-jcluster-default-pool-9cc4e660-rx9p --force
node "gke-jcluster-default-pool-9cc4e660-rx9p" cordoned
WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: fluentd-cloud-logging-gke-jcluster-default-pool-9cc4e660-rx9p, kube-proxy-gke-jcluster-default-pool-9cc4e660-rx9p
node "gke-jcluster-default-pool-9cc4e660-rx9p" drained
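As a quick sanity check before moving on (just a sketch; grepping on the name suffix is only for brevity), I could confirm the node really is cordoned and emptied:

# the cordoned node should show SchedulingDisabled in its STATUS column
$ kl get nodes | grep rx9p
# no workload pods should remain scheduled on the drained node
$ kl get pods --all-namespaces -o wide | grep rx9p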
Delete the gcloud VM:
$ gcloud compute instances delete gke-jcluster-default-pool-9cc4e660-rx9p
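For reference, a more explicit variant of the same delete (the zone below is a placeholder I'm guessing at, not necessarily the cluster's real zone):

# --zone avoids the interactive zone prompt; --quiet skips the confirmation
$ gcloud compute instances delete gke-jcluster-default-pool-9cc4e660-rx9p --zone us-central1-b --quiet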
List the VMs:
$ gcloud compute instances list
In the result, I'm still seeing the VM I deleted above - rx9p. If I do kubectl get nodes, I'm seeing the rx9p node too.
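To rule out misreading the full listing, I can narrow both checks to just that name (the --filter expression is only a sketch):

$ gcloud compute instances list --filter="name:gke-jcluster-default-pool-9cc4e660-rx9p"
$ kl get nodes gke-jcluster-default-pool-9cc4e660-rx9p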
What's going on? Is something restarting the VM I'm deleting? Do I have to wait for some timeout between the commands?
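In case it's relevant, one check I could run (assuming the node pool's VMs are owned by a managed instance group, and using placeholders for the group name and zone) is whether that group still lists, or has recreated, the rx9p instance:

# a managed instance group recreates deleted instances to keep its target size;
# list the groups first, then inspect the one that owns the node-pool VMs
$ gcloud compute instance-groups managed list
$ gcloud compute instance-groups managed list-instances <group-name> --zone <zone>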