
When I drain or cordon a Kubernetes node, the node is marked unschedulable, but the pods already running on that node are not moved to a different node immediately.

To be clear, these pods are not created by a DaemonSet.

So how can an application running in pods stay 100% available when a node becomes faulty or has other issues?

Any inputs?

Commands used:

To drain/cordon the node to make it unavailable:

kubectl drain node1
kubectl cordon node1

To check the node status:

kubectl get nodes

To check the pod status before/after cordon or drain:

kubectl get pods -o wide
kubectl describe pod <pod-name>

The surprising part is that even when the node is unavailable, the pod status always shows Running. :-)
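A note on the two commands above (a sketch; `node1` is the placeholder node name from the question): `kubectl cordon` only marks the node unschedulable and never evicts pods that are already running, which is why their status keeps showing Running. `kubectl drain` cordons and then evicts, and in practice it usually needs extra flags:

```
# cordon: mark node1 unschedulable; existing pods keep running untouched
kubectl cordon node1

# drain: cordon + evict pods; commonly needed flags (assumption: the
# cluster has DaemonSet pods and/or pods using emptyDir volumes):
kubectl drain node1 --ignore-daemonsets --delete-emptydir-data
```

Even after a successful drain, a pod that is not owned by a controller is simply deleted rather than recreated elsewhere.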


1 Answer


Pods do not migrate to another node by themselves.

You can use workload resources to create and manage multiple Pods for you. A controller for the resource handles replication and rollout and automatic healing in case of Pod failure. For example, if a Node fails, a controller notices that Pods on that Node have stopped working and creates a replacement Pod. The scheduler places the replacement Pod onto a healthy Node.
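As an illustrative sketch of such a workload resource (all names and the image here, e.g. `web` and `nginx:1.25`, are placeholders, not from the question), a Deployment declares a desired replica count; if a node is drained or fails, the Deployment's ReplicaSet creates replacement Pods and the scheduler places them on healthy nodes:

```yaml
# Hypothetical Deployment manifest; names and image are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3          # desired pod count the controller maintains
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
```

With `replicas: 3`, evicting one of these pods during a drain makes the controller create a replacement on another schedulable node, which is how the application stays available through node maintenance.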

Some examples of controllers are:

- Deployment (backed by a ReplicaSet)
- StatefulSet
- Job

Check this link for more information.