344
votes

I tried to delete a ReplicationController with 12 pods and I could see that some of the pods are stuck in Terminating status.

My Kubernetes cluster consists of one control plane node and three worker nodes installed on Ubuntu virtual machines.

What could be the reason for this issue?

NAME        READY     STATUS        RESTARTS   AGE
pod-186o2   1/1       Terminating   0          2h
pod-4b6qc   1/1       Terminating   0          2h
pod-8xl86   1/1       Terminating   0          1h
pod-d6htc   1/1       Terminating   0          1h
pod-vlzov   1/1       Terminating   0          1h
17
Are the scheduler and controller-manager running? – Antoine Cotten

17 Answers

650
votes

You can use the following command to delete the pod forcefully.

kubectl delete pod <PODNAME> --grace-period=0 --force --namespace <NAMESPACE>
73
votes

Force delete the pod:

kubectl delete pod --grace-period=0 --force --namespace <NAMESPACE> <PODNAME>

The --force flag is mandatory.

36
votes

The original question is "What could be the reason for this issue?", and the answer is discussed in https://github.com/kubernetes/kubernetes/issues/51835 and https://github.com/kubernetes/kubernetes/issues/65569; see also https://www.bountysource.com/issues/33241128-unable-to-remove-a-stopped-container-device-or-resource-busy

It's caused by a Docker mount leaking into another namespace.

You can log on to the pod's host to investigate:

minikube ssh
docker container ps | grep <id>
docker container stop <id> 
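If the container refuses to stop with a "device or resource busy" error, a common diagnosis for this kind of mount leak (discussed in the linked issues) is to search every process's mount table on the host. This is a sketch; <id> stands for the container ID, and any matching processes are the ones holding the leaked mount:

```shell
# List host processes whose mount table still references the container's
# mount; restarting or killing those processes releases the mount.
grep <id> /proc/*/mountinfo
```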
23
votes

Delete the finalizers block from resource (pod,deployment,ds etc...) yaml:

"finalizers": [
  "foregroundDeletion"
]
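Rather than editing the resource's YAML by hand, the same finalizers block can be cleared with a JSON merge patch. A sketch, assuming a pod with placeholder names <PODNAME> and <NAMESPACE> (swap in deployment, ds, etc. for other resource types), and note that finalizers usually exist so a controller can do cleanup first:

```shell
# Clear the finalizers list so deletion can proceed.
kubectl patch pod <PODNAME> -n <NAMESPACE> -p '{"metadata":{"finalizers":null}}'
```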
23
votes

I found this command more straightforward:

for p in $(kubectl get pods | grep Terminating | awk '{print $1}'); do kubectl delete pod $p --grace-period=0 --force;done

It will delete all pods in Terminating status in the default namespace.
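To see what the selection stage of that one-liner actually matches, it can be simulated on canned kubectl get pods output (the sample pod names below are made up for illustration):

```shell
# Stand-in for `kubectl get pods` output; only Terminating pods should match.
sample='NAME        READY   STATUS        RESTARTS   AGE
pod-186o2   1/1     Terminating   0          2h
pod-good    1/1     Running       0          2h'
printf '%s\n' "$sample" | grep Terminating | awk '{print $1}'
# prints: pod-186o2
```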

13
votes

Practical answer -- you can always delete a terminating pod by running:

kubectl delete pod NAME --grace-period=0

Historical answer -- There was an issue in version 1.1 where sometimes pods get stranded in the Terminating state if their nodes are uncleanly removed from the cluster.

12
votes

In my case the --force option didn't quite work. I could still see the pod! It was stuck in Terminating/Unknown state. So after running

kubectl delete pods <pod> -n redis --grace-period=0 --force

I ran

kubectl patch pod <pod> -p '{"metadata":{"finalizers":null}}'
6
votes

If --grace-period=0 on its own is not working, then you can add --force:

kubectl delete pods <pod> --grace-period=0 --force
4
votes

I stumbled upon this recently when removing the rook-ceph namespace: it got stuck in the Terminating state.

The only thing that helped was removing the Kubernetes finalizer by calling the k8s API directly with curl, as suggested here.

  • kubectl get namespace rook-ceph -o json > tmp.json
  • delete the kubernetes finalizer in tmp.json (leave an empty array, "finalizers": [])
  • run kubectl proxy in another terminal for auth purposes, then run the following curl request against the returned port
  • curl -k -H "Content-Type: application/json" -X PUT --data-binary @tmp.json 127.0.0.1:8001/k8s/clusters/c-mzplp/api/v1/namespaces/rook-ceph/finalize
  • the namespace is gone

Detailed rook ceph teardown here.
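On reasonably recent kubectl versions you may be able to skip the kubectl proxy and curl steps entirely. This is a sketch, assuming the edited tmp.json from above; kubectl replace --raw sends the PUT to the finalize subresource using kubectl's own credentials:

```shell
# Same effect as the curl call above, without running a proxy.
kubectl replace --raw "/api/v1/namespaces/rook-ceph/finalize" -f tmp.json
```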

4
votes

I stumbled upon this recently while freeing up resources in my cluster. Here is the command to delete them all:

kubectl get pods --all-namespaces | grep Terminating | while read line; do
  pod_name=$(echo $line | awk '{print $2}')
  name_space=$(echo $line | awk '{print $1}')
  kubectl delete pods $pod_name -n $name_space --grace-period=0 --force
done

Hope this helps someone who reads this.

3
votes

I'd not recommend force deleting pods unless the container has already exited.

  1. Check the kubelet logs to see what is causing the issue: journalctl -u kubelet
  2. Check the Docker logs: journalctl -u docker.service
  3. Check whether the pod's volume mount points still exist and whether anything holds a lock on them.
  4. Check whether the host is out of memory or disk space.
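A sketch of those four checks as commands run on the node hosting the pod (unit names and mount paths are typical defaults and may differ on your distro or container runtime):

```shell
journalctl -u kubelet -n 100 --no-pager          # 1. recent kubelet logs
journalctl -u docker.service -n 100 --no-pager   # 2. recent Docker logs
mount | grep kubelet                             # 3. lingering pod volume mounts
free -m && df -h                                 # 4. memory and disk pressure
```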
3
votes

Before doing a force deletion I would first do some checks.

1. Node state: get the name of the node your pod is running on. You can see this with the following command:

kubectl -n YOUR_NAMESPACE describe pod YOUR_PODNAME

Under the "Node" label you will see the node name. With that you can do:

kubectl describe node NODE_NAME

Check the "Conditions" field for anything strange. If this is fine, then you can move to the next step; redo:

kubectl -n YOUR_NAMESPACE describe pod YOUR_PODNAME

2. Check the reason why it is hanging, which you can find under the "Events" section. I say this because you might need to take preliminary actions before force deleting the pod: force deleting the pod only deletes the pod itself, not the underlying resource (a stuck Docker container, for example).

3
votes

Please try the command below (replace <pod> with the pod name):

kubectl patch pod <pod> -p '{"metadata":{"finalizers":null}}'

2
votes

One reason WHY this happens can be turning off a node without draining it first. The fix in this case is to turn the node back on; then termination should succeed.

1
votes

In my case, I don't like the workaround, so here are the steps:

  • k get pod -o wide -> this will show which node is running the pod
  • k get nodes -> check the status of that node; I got NotReady

I went and fixed that node. In my case, it just needed a kubelet restart:

  • ssh that-node -> run swapoff -a && systemctl restart kubelet

Now deletion of the pod should work without forcing the poor pod.

0
votes

You can use awk:

kubectl get pods --all-namespaces | awk '{if ($4=="Terminating") print "kubectl delete pod " $2 " -n " $1 " --force --grace-period=0 ";}' | sh
0
votes

Force delete ALL pods in namespace:

kubectl delete pods --all -n <namespace> --grace-period 0 --force