15
votes

After creating a simple hello world deployment, my pod status shows as "PENDING". When I run kubectl describe pod on the pod, I get the following:

Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  14s (x6 over 29s)  default-scheduler  0/1 nodes are available: 1 NodeUnderDiskPressure.

If I check on my node health, I get:

Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Fri, 27 Jul 2018 15:17:27 -0700   Fri, 27 Jul 2018 14:13:33 -0700   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Fri, 27 Jul 2018 15:17:27 -0700   Fri, 27 Jul 2018 14:13:33 -0700   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     True    Fri, 27 Jul 2018 15:17:27 -0700   Fri, 27 Jul 2018 14:13:43 -0700   KubeletHasDiskPressure       kubelet has disk pressure
  Ready            True    Fri, 27 Jul 2018 15:17:27 -0700   Fri, 27 Jul 2018 14:13:43 -0700   KubeletReady                 kubelet is posting ready status. AppArmor enabled

So it seems the issue is that "kubelet has disk pressure", but I can't really figure out what that means. I can't SSH into minikube and check its disk space because I'm using VMware Workstation with --vm-driver=none.

kubernetes.io/docs/concepts/architecture/nodes describes the statuses. I don't know that you can resolve it without getting an admin shell on the node somehow, unless you're content to destroy and recreate the node. In short this just sounds like "you're trying to fit too much on the one VM". - David Maze
Sorry, what does it mean to get "an admin shell on the node"? - Imran
The Kubernetes Node object represents some piece of computer hardware (or a VM). So you need a root shell on the VM so you can run administrative commands like df and docker images. If you can't ssh into it, maybe you can directly access its console. - David Maze
Read the minikube docs; you can get a bash shell on the node. You didn't give it enough disk space; open up your VMware Workstation app and see what disk it has. - Lev Kuznetsov

4 Answers

12
votes

This is an old question, but I just saw it and, because it doesn't have an answer yet, I will write my answer.

I was facing this problem and my pods were getting evicted many times because of disk pressure, and commands such as df or du were not helpful.

With the help of the answer that I wrote at https://serverfault.com/a/994413/509898, I found out that the main problem was the pods' log files: because K8s does not rotate container logs itself, they can grow to hundreds of gigabytes.
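
For example, something along these lines can show which container log files have grown out of control. This is only a sketch: it assumes the Docker runtime and the default paths for Docker's data root and the kubelet's pod log directory, which may differ in your setup:

  # Run as root on the node: largest per-container directories under Docker's
  # data root (each one holds a <container-id>-json.log file)
  du -sh /var/lib/docker/containers/* 2>/dev/null | sort -rh | head -n 20

  # Largest pod log trees under the kubelet's log directory
  # (-L follows the symlinks the kubelet creates to the actual log files)
  du -sLh /var/log/pods/* 2>/dev/null | sort -rh | head -n 20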

There are different log rotation methods available, but I am currently still searching for the best practice for K8s, so I can't suggest a specific one yet.

I hope this can be helpful.

0
votes

Personally, I couldn't solve the problem using kube commands because ...
In my case it was said to be due to an antivirus (McAfee). Reinstalling the company-endorsed Docker Desktop version solved the problem.

0
votes

Had a similar issue.

My_error_log : Warning FailedScheduling 3m23s default-scheduler 0/3 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {node-role.kubernetes.io/controlplane: true}, that the pod didn't tolerate, 1 node(s) had taint {node.kubernetes.io/disk-pressure: }, that the pod didn't tolerate

For me, the / partition was filled to 82%. Cleaning up some unwanted folders resolved the issue. Commands used:

  1. ssh uname@IP_or_hostname (log in to the worker node)
  2. df -h (check the disk usage)
  3. rm -rf folder_name (delete the unwanted folder; this forcefully deletes the files, so make sure you really want to delete them)
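
If it is not obvious which folders are the "unwanted" ones, here is a quick sketch (assuming GNU du on the worker node) to see which top-level directories are actually consuming the space on the full partition:

  # Per-directory usage on the root filesystem only (-x stays on one filesystem),
  # sorted largest first
  sudo du -xh --max-depth=1 / 2>/dev/null | sort -rh | head -n 15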

I hope this can save someone's time.

-3
votes

The community hinted at this in the comments above. I will try to consolidate it here.

The kubelet maps one or more eviction signals to a corresponding node condition.

If a hard eviction threshold has been met, or a soft eviction threshold has been met independent of its associated grace period, the kubelet reports a condition that reflects the node is under pressure.

DiskPressure

Available disk space and inodes on either the node’s root filesystem or image filesystem has satisfied an eviction threshold

So the problem might be that there is not enough disk space, or that the filesystem has run out of inodes. You have to learn about the conditions of your environment and then apply appropriate thresholds in your kubelet configuration.
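
As a sketch of what such kubelet settings can look like (the threshold values below are illustrative, not recommendations; how you pass the flags depends on how your kubelet is launched, e.g. via minikube's --extra-config, a systemd drop-in, or a kubelet config file):

  # Hard eviction: act immediately once free disk or inodes on the node's
  # root filesystem fall below these values.
  # Soft eviction: act only after the condition has persisted for the grace period.
  kubelet --eviction-hard='nodefs.available<1Gi,nodefs.inodesFree<5%' \
          --eviction-soft='nodefs.available<2Gi' \
          --eviction-soft-grace-period=nodefs.available=2m

Note that tuning thresholds only changes when the kubelet reacts; freeing disk space or inodes is what actually clears the DiskPressure condition.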

You do not need to ssh into minikube since you are running it directly on your host. --vm-driver=none is an

option that runs the Kubernetes components on the host and not in a VM. Docker is required to use this driver but no hypervisor. If you use --vm-driver=none, be sure to specify a bridge network for docker. Otherwise it might change between network restarts, causing loss of connectivity to your cluster.

You might try to check whether there are issues related to the topics mentioned above:

kubectl describe nodes

Look at df reports:

df -i
df -h
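
If df shows that most of the space sits under Docker's data directory, Docker's own tooling can break it down and reclaim it (the Docker runtime is required by --vm-driver=none, so these commands should be available; the prune commands are destructive for anything not currently in use):

  # Show how much space images, containers and volumes use
  docker system df
  # Remove unused images and stopped containers
  docker image prune -a
  docker container prune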

Some further reading so you can grasp the topic: Configure Out Of Resource Handling - section Node Conditions.