59
votes

I have 3 nodes in a Kubernetes cluster. I created a DaemonSet, which scheduled a pod on each of the 3 nodes, and all 3 pods were running successfully. But for some reason, one of the pods failed.

I need to know how to restart this one pod without affecting the other pods in the DaemonSet, and without deleting and recreating the DaemonSet itself.

Thanks

4
I'm a bit confused by "deployed it in all the 3 devices". Normally you create a DaemonSet with e.g. kubectl through the API server, and then Kubernetes takes care of creating a pod on each node (device). The failed pod should also be replaced automatically by a new one. Could you please add the YAML definition of your DaemonSet to this question? The output of kubectl describe pod for the failed pod would also help. You can list terminated pods with kubectl get pod -a. – Thomas Koch

4 Answers

92
votes

Run kubectl delete pod <podname>. This deletes only that one pod; the owning Deployment/StatefulSet/ReplicaSet/DaemonSet will schedule a new one in its place.
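As a sketch of that workflow (the DaemonSet name, label, and pod name below are placeholders; check your DaemonSet's spec.selector for the real labels):

```shell
# List the pods owned by the DaemonSet (assumes the label app=mydaemonset).
kubectl get pods -l app=mydaemonset -o wide

# Delete only the failed pod; the DaemonSet controller notices the missing
# pod and recreates it on the same node.
kubectl delete pod mydaemonset-abc12

# Watch the replacement pod come up.
kubectl get pods -l app=mydaemonset -w
```

The other pods in the DaemonSet are untouched, because only the one pod object is deleted.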

53
votes

There are other ways to achieve what you want:

  • Just use rollout command

    kubectl rollout restart deployment mydeploy

  • You can set an environment variable, which will force the deployment's pods to restart:

    kubectl set env deployment mydeploy DEPLOY_DATE="$(date)"

  • You can scale your deployment down to zero, and then back up to some positive value:

    kubectl scale deployment mydeploy --replicas=0
    kubectl scale deployment mydeploy --replicas=1
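Since the question is actually about a DaemonSet rather than a Deployment, note that the first option works for DaemonSets as well (kubectl 1.15+). Assuming the DaemonSet is named mydaemonset:

```shell
# Trigger a rolling restart of every pod in the DaemonSet,
# honoring its updateStrategy (RollingUpdate by default).
kubectl rollout restart daemonset mydaemonset

# Follow the rollout until all pods have been replaced.
kubectl rollout status daemonset mydaemonset
```

This restarts all the pods, though, so if you only want to replace the single failed pod, deleting just that pod (as in the accepted answer) is the more targeted fix.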

12
votes

Just for others reading this...

A better solution (IMHO) is to configure a liveness probe; if the container fails the probe, the kubelet will restart it automatically.

This is a great feature Kubernetes offers out of the box, and it's part of its self-healing behavior.

Also look into the pod lifecycle docs.
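A minimal illustrative pod spec with an HTTP liveness probe; the name, image, port, and timings are placeholders to adapt to your workload:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: app
    image: nginx:1.25
    livenessProbe:
      httpGet:               # probe an HTTP endpoint inside the container
        path: /healthz
        port: 80
      initialDelaySeconds: 10  # wait before the first probe
      periodSeconds: 5         # probe every 5 seconds
      failureThreshold: 3      # restart after 3 consecutive failures
```

The same livenessProbe block goes under spec.template.spec.containers in a DaemonSet manifest.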

1
votes

kubectl -n <namespace> delete pods --field-selector=status.phase=Failed

I think the above command is quite useful when you want to restart one or more failed pods :D

And we don't need to care about the names of the failed pods.
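If the failed pods are spread across several namespaces, the same field selector can be combined with the all-namespaces flag (a sketch; verify against your cluster before running):

```shell
# Delete every pod in phase Failed, in all namespaces.
kubectl delete pods --field-selector=status.phase=Failed --all-namespaces
```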