1 vote

I am using Ubuntu Bionic (18.04) with the latest version of kubeadm from Ubuntu's repositories (1.13.4) and Calico 3.6, following their documentation for "Installing with the Kubernetes API datastore—50 nodes or less" (https://docs.projectcalico.org/v3.6/getting-started/kubernetes/installation/calico).

The cluster was initialized with:

sudo kubeadm init --pod-network-cidr=192.168.0.0/16
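The follow-up steps were roughly the standard ones from the kubeadm output and the Calico docs (a sketch; the calico.yaml is the manifest from the v3.6 page linked above):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# manifest downloaded from the Calico 3.6 docs page linked above
kubectl apply -f calico.yaml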

But after applying calico.yaml, my node gets stuck with the following conditions:

Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Mon, 15 Apr 2019 20:24:43 -0300   Mon, 15 Apr 2019 20:21:20 -0300   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Mon, 15 Apr 2019 20:24:43 -0300   Mon, 15 Apr 2019 20:21:20 -0300   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Mon, 15 Apr 2019 20:24:43 -0300   Mon, 15 Apr 2019 20:21:20 -0300   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Mon, 15 Apr 2019 20:24:43 -0300   Mon, 15 Apr 2019 20:21:20 -0300   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
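(These conditions come from the node description and can be reproduced with something like the command below; cherokee is the node name, as seen in the pod names further down.)

kubectl describe node cherokee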

When I check the system pods (kubectl get pods -n kube-system), I get:

NAME                                       READY   STATUS     RESTARTS   AGE
calico-kube-controllers-55df754b5d-zsttg   0/1     Pending    0          34s
calico-node-5n6p2                          0/1     Init:0/2   0          35s
coredns-86c58d9df4-jw7wk                   0/1     Pending    0          99s
coredns-86c58d9df4-sztxw                   0/1     Pending    0          99s
etcd-cherokee                              1/1     Running    0          36s
kube-apiserver-cherokee                    1/1     Running    0          46s
kube-controller-manager-cherokee           1/1     Running    0          59s
kube-proxy-22xwj                           1/1     Running    0          99s
kube-scheduler-cherokee                    1/1     Running    0          44s
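For reference, the stuck pods can be inspected in more detail with something like the following (pod names taken from the listing above):

kubectl describe pod -n kube-system calico-node-5n6p2
kubectl describe pod -n kube-system calico-kube-controllers-55df754b5d-zsttg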

Could this be a bug, or is there something missing?


1 Answer

2 votes

Try removing the taint on the master node:

kubectl taint nodes --all node-role.kubernetes.io/master-

Reference: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#control-plane-node-isolation
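As a sketch, the whole sequence with a quick check afterwards (node name taken from the question) would look like:

kubectl taint nodes --all node-role.kubernetes.io/master-

# the Taints line should no longer show node-role.kubernetes.io/master:NoSchedule
kubectl describe node cherokee | grep -i taints

# watch the kube-system pods; the Pending ones should get scheduled once the taint is gone
kubectl get pods -n kube-system -w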