I am trying to set up a Kubernetes HA cluster with 5 master nodes by following the Kubernetes documentation: https://kubernetes.io/docs/setup/independent/high-availability/.

I have installed Docker 1.13 and kubeadm, kubectl, and kubelet version 1.11.2 on the first master.

I downloaded all the required images onto all master nodes and ran kubeadm init on master node 1; kubelet is running with no errors, and an etcd cluster was created on master node 1.
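For reference, a minimal sketch of the kubeadm-config.yaml that guide uses for the first master, assuming the v1.11 alpha2 config schema; LOAD_BALANCER_DNS and LOAD_BALANCER_PORT are placeholders for the HAProxy endpoint in front of the masters:

```yaml
# Sketch only - field names follow the v1.11 kubeadm alpha2 schema;
# substitute the real load-balancer address before running kubeadm init.
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.2
apiServerCertSANs:
- "LOAD_BALANCER_DNS"
api:
  controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"
```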

I then copied all the required config and certificate files to the rest of the master nodes, ran kubeadm on master node 2, and started the kubelet service. kubelet ran successfully on master node 2, and its etcd member joined the existing cluster.
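As a sanity check on that copy step, here is a dry-run sketch of replicating the CA and service-account files from master 1 to the remaining masters; the IP list is a placeholder, and the file list follows the v1.11 kubeadm HA guide. It only prints the scp commands, so you can review them before piping the output to sh:

```shell
# Dry run: print the scp commands needed to copy the shared certs/config
# from master 1 to the other masters. IPs below are hypothetical examples.
CONTROL_PLANE_IPS="10.0.0.12 10.0.0.13 10.0.0.14 10.0.0.15"
CERT_FILES="
/etc/kubernetes/pki/ca.crt
/etc/kubernetes/pki/ca.key
/etc/kubernetes/pki/sa.key
/etc/kubernetes/pki/sa.pub
/etc/kubernetes/pki/front-proxy-ca.crt
/etc/kubernetes/pki/front-proxy-ca.key
/etc/kubernetes/pki/etcd/ca.crt
/etc/kubernetes/pki/etcd/ca.key
/etc/kubernetes/admin.conf
"
for host in $CONTROL_PLANE_IPS; do
  for f in $CERT_FILES; do
    echo scp "$f" "root@${host}:${f}"
  done
done
```

If any of these files is missing or stale on a joining master, kubeadm on that node will generate mismatched certs and the etcd member will fail to join.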

But when I start kubelet on master node 3, it deletes all the Docker images from that node except the pause image, cannot create the etcd or any kube-* pods, and fails to join the third node to the cluster.

The same happens with the other two master nodes.

Can anyone help me in resolving this issue?

Thanks in advance.

Can you post any errors you see in the kubelet logs? – Rico

Hi Rico, there is no container running on master node 3, and the kubelet logs show "container network runtime not ready" and "failed to admit pod" for calico-node and kube-proxy. – Raghu.k

Is the hardware/devices for master node 3 exactly the same as on the other nodes? It looks like your pods are getting evicted. – Rico

Yes, it's exactly the same; I have configured them with the same configuration, and I have HAProxy as a load balancer in front of them. – Raghu.k

What do you see on node 3 after you type journalctl -xeu kubelet? That's your kubelet log. – Rico

1 Answer


As @Raghu.k mentioned in his last comment, the problem with master node 3 was caused by a lack of free disk space on that node; recreating the affected node resolved the issue. Flagged as a community wiki for future reference.
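This matches the symptoms described in the question: when disk usage crosses kubelet's thresholds, it garbage-collects container images (which explains the disappearing Docker images) and evicts or refuses to admit pods (which explains the failed calico-node and kube-proxy pods). A quick check for this condition, assuming Docker and kubelet use their default data directories:

```shell
# Check filesystem usage where images and pod data live. By default, kubelet
# starts image garbage collection when usage passes --image-gc-high-threshold
# (85%) and evicts pods under disk pressure (e.g. nodefs.available < 10%).
df -h / /var/lib/docker /var/lib/kubelet 2>/dev/null || df -h /
```

If usage on these filesystems is near those thresholds, free up space (or enlarge the disk) before restarting kubelet, otherwise the images will be removed again.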