
Kubernetes version: 1.12.4
Docker version: 18.06.1-ce
OS: CentOS Linux release 7.5.1804 (Core)

Everything works fine, but whenever I restart the kubelet service, the node status changes to NotReady and the log lines below appear in the kubelet logs. The node stays NotReady for about the next 3 minutes. We observed this on 1.11.x and 1.12.x; we have not yet tried 1.13.x. The issue occurs on every node in the k8s cluster. There is no load on the nodes (CPU/memory/iowait); everything else is fine.

kubelet.go:1821] skipping pod synchronization - [container runtime is down]
kubelet.go:1821] skipping pod synchronization - [container runtime is down]
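
For reference, this is roughly how we reproduce and observe the transition (a minimal sketch; <node-name> is a placeholder for the affected node):

# On the affected node: restart kubelet
sudo systemctl restart kubelet

# From a machine with kubectl access: watch the node flip to NotReady and back
kubectl get node <node-name> -w

# Follow the kubelet logs for the "skipping pod synchronization" messages
sudo journalctl -u kubelet -f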


3 Answers

0 votes

Why do you need to restart kubelet?

This happens because, while kubelet is restarting, it simply cannot get the proper status of your container runtime yet. As a result it reports that the container runtime is down, despite the fact that the runtime is actually up and running:

Mar 12 13:51:13 kube-calico-2 kubelet[11597]: I0312 13:51:13.429889   11597 setters.go:518] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2019-03-12 13:51:13.429850911 +0000 UTC m=+0.652556738 LastTransitionTime:2019-03-12 13:51:13.429850911 +0000 UTC m=+0.652556738 Reason:KubeletNotReady Message:container runtime status check may not have completed yet}
Mar 12 13:51:13 kube-calico-2 kubelet[11597]: I0312 13:51:13.483669   11597 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet]
Mar 12 13:51:13 kube-calico-2 kubelet[11597]: I0312 13:51:13.884530   11597 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet]
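
You can verify that the runtime itself never went down during this window. A minimal sketch, assuming Docker as the runtime (as in the question) and <node-name> as a placeholder:

# While the node is NotReady, check the runtime directly on the node
sudo systemctl status docker --no-pager
sudo docker ps

# Watch the Ready condition recover on its own
kubectl describe node <node-name> | grep -A4 'Conditions:'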
0 votes

I hit this issue with Kubernetes 1.14.3 too; the workaround was to set the kubelet flag --node-status-update-frequency to 30s.
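
A minimal sketch of applying that, assuming a kubeadm-provisioned CentOS node where extra kubelet flags are read from /etc/sysconfig/kubelet (adjust the path for your distro/setup):

# Set the flag in the kubelet's extra args and restart it
# (this overwrites any existing KUBELET_EXTRA_ARGS; merge by hand if you already set some)
echo 'KUBELET_EXTRA_ARGS=--node-status-update-frequency=30s' | sudo tee /etc/sysconfig/kubelet
sudo systemctl daemon-reload
sudo systemctl restart kubelet

# Verify the running process picked up the flag
ps aux | grep 'node-status-update-frequency'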

-1 votes

Try identifying the container that is causing the kubelet timeout (for example with docker stats or docker inspect) and kill its process.
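
A minimal sketch of that approach, assuming Docker and that you have already identified a misbehaving container (<container-name> is a placeholder):

# Look for a container with abnormal resource usage
sudo docker stats --no-stream

# Get the host PID of the suspect container and kill it
PID=$(sudo docker inspect --format '{{.State.Pid}}' <container-name>)
sudo kill -9 "$PID"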