
I have a Kubernetes cluster deployed on GCP with a single node, 4 CPUs and 15GB of memory. There are a few pods, each bound to a persistent volume through a persistent volume claim. I have observed that the pods restarted automatically and the data in the persistent volume was lost.

After some research, I suspect this could be because of the pod eviction policy. When I ran kubectl describe pod, I noticed the error below.

0/1 nodes are available: 1 node(s) were not ready, 1 node(s) were out of disk space, 1 node(s) were unschedulable.

The restart policy of my pods is "Always", so I think the pods restarted after being deprived of resources.

How do I identify the pod eviction policy of my cluster and change it so that this does not happen in the future?


1 Answer


How do I identify the pod eviction policy of my cluster and change it?

These eviction thresholds are kubelet flags, and you can tune the values according to your requirements by editing the kubelet config file. The details are here: config-file
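For illustration, here is a minimal sketch of what those thresholds look like in a kubelet config file; the values shown are the documented defaults, and you would tune them to your node's capacity:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Hard eviction thresholds: the kubelet evicts pods as soon as one is crossed.
evictionHard:
  memory.available: "100Mi"   # evict when free memory drops below 100Mi
  nodefs.available: "10%"     # evict when the node filesystem has <10% free
  nodefs.inodesFree: "5%"     # evict when free inodes drop below 5%
  imagefs.available: "15%"    # filesystem that holds container images
```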

Dynamic Kubelet Configuration allows you to edit these values on a live cluster.
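As a rough sketch of that flow (the node name and ConfigMap name below are placeholders for the example, and the dumped config needs to be converted to a KubeletConfiguration as described in the Kubernetes docs), you publish the edited config as a ConfigMap and point the node at it:

```bash
# Dump the node's current kubelet configuration via the configz endpoint.
kubectl proxy --port=8001 &
curl -sSL "http://localhost:8001/api/v1/nodes/my-node/proxy/configz" > kubelet.json

# Edit the eviction thresholds in the file, convert it to a
# KubeletConfiguration object, then publish it as a ConfigMap:
kubectl -n kube-system create configmap my-node-config --from-file=kubelet=kubelet.json

# Point the node at the new config by setting spec.configSource:
kubectl patch node my-node -p '{"spec":{"configSource":{"configMap":{"name":"my-node-config","namespace":"kube-system","kubeletConfigKey":"kubelet"}}}}'
```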

The restart policy of my pods is "Always", so I think the pods restarted after being deprived of resources.

Your pod has been rescheduled because of a problem on the node (not enough disk space).
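You can confirm this by inspecting the node's conditions; something along these lines (the node name is a placeholder):

```bash
kubectl describe node my-node
# In the Conditions section, look for entries such as:
#   DiskPressure   True    ...   KubeletHasDiskPressure   kubelet has disk pressure
#   Ready          False   ...   KubeletNotReady          ...
```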

The restart policy of my pods is "Always".

It means that if the pod is not up and running, the kubelet will try to restart it on the same node.
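For reference, the restart policy is set per pod in its spec; a minimal sketch (the pod and container names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  restartPolicy: Always   # the default; the alternatives are OnFailure and Never
  containers:
  - name: app
    image: nginx
```

Note that restartPolicy only governs restarts on the same node; when a pod is evicted, it is the owning controller (for example, a Deployment) that recreates it elsewhere.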