2
votes

The problem I'm running into is very similar to other existing posts, except they all share the same solution, so I'm creating a new thread.

The problem: the master node is still in "NotReady" status after installing Flannel.

Expected result: the master node becomes "Ready" after installing Flannel.

Background: I am following this guide to install Flannel.

My concern is that I am using kubelet v1.17.2 by default, which only came out about a month ago. (Can anyone confirm that v1.17.2 works with Flannel?)

Here is the output after running the following command on the master node: kubectl describe node machias

Name:               machias
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=machias
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"be:78:65:7f:ae:6d"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.122.172
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sat, 15 Feb 2020 01:00:01 -0500
Taints:             node.kubernetes.io/not-ready:NoExecute
                    node-role.kubernetes.io/master:NoSchedule
                    node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  machias
  AcquireTime:     <unset>
  RenewTime:       Sat, 15 Feb 2020 13:54:56 -0500
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Sat, 15 Feb 2020 13:54:52 -0500   Sat, 15 Feb 2020 00:59:54 -0500   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sat, 15 Feb 2020 13:54:52 -0500   Sat, 15 Feb 2020 00:59:54 -0500   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Sat, 15 Feb 2020 13:54:52 -0500   Sat, 15 Feb 2020 00:59:54 -0500   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Sat, 15 Feb 2020 13:54:52 -0500   Sat, 15 Feb 2020 00:59:54 -0500   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
  InternalIP:  192.168.122.172
  Hostname:    machias
Capacity:
  cpu:                2
  ephemeral-storage:  38583284Ki
  hugepages-2Mi:      0
  memory:             4030364Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  35558354476
  hugepages-2Mi:      0
  memory:             3927964Ki
  pods:               110
System Info:
  Machine ID:                 20cbe0d737dd43588f4a2bccd70681a2
  System UUID:                ee9bc138-edee-471a-8ecc-f1c567c5f796
  Boot ID:                    0ba49907-ec32-4e80-bc4c-182fccb0b025
  Kernel Version:             5.3.5-200.fc30.x86_64
  OS Image:                   Fedora 30 (Workstation Edition)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://19.3.5
  Kubelet Version:            v1.17.2
  Kube-Proxy Version:         v1.17.2
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (6 in total)
  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
  kube-system                 etcd-machias                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12h
  kube-system                 kube-apiserver-machias                        250m (12%)    0 (0%)      0 (0%)           0 (0%)         12h
  kube-system                 kube-controller-manager-machias               200m (10%)    0 (0%)      0 (0%)           0 (0%)         12h
  kube-system                 kube-flannel-ds-amd64-rrfht                   100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      12h
  kube-system                 kube-proxy-z2q7d                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12h
  kube-system                 kube-scheduler-machias                        100m (5%)     0 (0%)      0 (0%)           0 (0%)         12h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                650m (32%)  100m (5%)
  memory             50Mi (1%)   50Mi (1%)
  ephemeral-storage  0 (0%)      0 (0%)
Events:              <none>
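The Ready condition above blames an uninitialized CNI config. For anyone hitting the same symptom: the kubelet (via the dockershim) reads its CNI config from /etc/cni/net.d by default, so a quick diagnostic sketch, assuming the standard kubeadm paths, looks like this:

# Has the network plugin written a CNI config on this node?
ls -l /etc/cni/net.d/

# Recent kubelet log lines that mention CNI
sudo journalctl -u kubelet --no-pager | grep -i cni | tail -n 20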

And here is the output of the following command: kubectl get pods --all-namespaces

NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE
kube-system   coredns-6955765f44-7nz46                     0/1     Pending   0          12h
kube-system   coredns-6955765f44-xk5r2                     0/1     Pending   0          13h
kube-system   etcd-machias.cs.unh.edu                      1/1     Running   0          13h
kube-system   kube-apiserver-machias.cs.unh.edu            1/1     Running   0          13h
kube-system   kube-controller-manager-machias.cs.unh.edu   1/1     Running   0          13h
kube-system   kube-flannel-ds-amd64-rrfht                  1/1     Running   0          12h
kube-system   kube-flannel-ds-amd64-t7p2p                  1/1     Running   0          12h
kube-system   kube-proxy-fnn78                             1/1     Running   0          12h
kube-system   kube-proxy-z2q7d                             1/1     Running   0          13h
kube-system   kube-scheduler-machias.cs.unh.edu            1/1     Running   0          13h

Thank you for your help!

2
Which cloud provider are you using? – Shree Prakash
@ShreePrakash I am not using a cloud provider; I believe the virtual server is running on my school's network. – Hexalogy

2 Answers

0
votes

I've reproduced your scenario using the same versions you are running, to make sure these versions work with Flannel.

After testing it, I can confirm that there is no problem with the versions you are using.

I created the cluster following these steps:

Ensure the iptables tooling does not use the nftables backend (source):

update-alternatives --set iptables /usr/sbin/iptables-legacy
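As a quick sanity check (iptables v1.8 and later print the active backend in their version string), you can confirm the legacy backend is now selected:

# Show which iptables alternative is currently selected
update-alternatives --display iptables

# iptables v1.8+ reports its backend, e.g. "iptables v1.8.4 (legacy)"
iptables --version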

Installing the container runtime

sudo yum remove docker docker-common docker-selinux docker-engine
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install docker-ce-19.03.5-3.el7
sudo systemctl start docker
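Before continuing, it can help to make sure Docker also starts on boot and to note which cgroup driver it uses; kubeadm prints a preflight warning when Docker is on cgroupfs instead of systemd. A minimal check:

# Start Docker now and on every boot
sudo systemctl enable --now docker

# kubeadm warns if this reports cgroupfs rather than systemd
sudo docker info | grep -i 'cgroup driver'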

Installing kubeadm, kubelet and kubectl

sudo su -c "cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF"

sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
sudo yum install -y kubelet-1.17.2-0 kubeadm-1.17.2-0 kubectl-1.17.2-0 --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
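Don't be alarmed if the kubelet immediately enters a crash loop at this point; it restarts every few seconds while it waits for kubeadm init to give it a configuration, which is the documented behavior. You can still confirm the pinned versions landed:

# Confirm the exact package versions that were installed
rpm -q kubelet kubeadm kubectl

# Crash-looping here is expected until 'kubeadm init' runs
systemctl status kubelet --no-pager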

Note:

  • Setting SELinux in permissive mode by running setenforce 0 and sed ... effectively disables it. This is required to allow containers to access the host filesystem, which is needed by pod networks for example. You have to do this until SELinux support is improved in the kubelet.
  • Some users on RHEL/CentOS 7 have reported issues with traffic being routed incorrectly due to iptables being bypassed. You should ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config, e.g.

    cat <<EOF > /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    sysctl --system
    
  • Make sure that the br_netfilter module is loaded before this step. This can be done by running lsmod | grep br_netfilter. To load it explicitly call modprobe br_netfilter.
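To have br_netfilter loaded again after a reboot, a small sketch (the file name k8s.conf is just a convention):

# Load the module immediately
sudo modprobe br_netfilter

# Have it loaded automatically at boot
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF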

Initialize cluster with Flannel CIDR

sudo kubeadm init --pod-network-cidr=10.244.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
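Right after kubeadm init finishes, the node is expected to report NotReady and the CoreDNS pods to sit in Pending, both because no CNI plugin is installed yet; you can observe that state with:

# NotReady here is normal before the network add-on is applied
kubectl get nodes

# CoreDNS stays Pending for the same reason
kubectl get pods -n kube-system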

Add Flannel CNI

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
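Flannel runs as a DaemonSet and, once its pod is up on a node, writes the CNI config the kubelet was complaining about (typically /etc/cni/net.d/10-flannel.conflist). A quick way to verify, using the DaemonSet name that also appears in your pod listing:

# One flannel pod should be scheduled per node
kubectl -n kube-system get ds kube-flannel-ds-amd64

# The CNI config flannel writes on each node
ls /etc/cni/net.d/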

By default, your cluster will not schedule Pods on the control-plane node for security reasons. If you want to be able to schedule Pods on the control-plane node, e.g. for a single-machine Kubernetes cluster for development, run:

kubectl taint nodes --all node-role.kubernetes.io/master-
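You can confirm the taint is gone (and see any that remain) with:

# Lists the taints still present on each node
kubectl describe nodes | grep -A2 -i taints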

As can be seen below, my master node is Ready. Please follow this how-to and let me know if you can achieve your desired state.

$ kubectl describe nodes
Name:               kubeadm-fedora
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=kubeadm-fedora
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"8e:7e:bf:d9:21:1e"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 10.128.15.200
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 17 Feb 2020 11:31:59 +0000
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  kubeadm-fedora
  AcquireTime:     <unset>
  RenewTime:       Mon, 17 Feb 2020 11:47:52 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Mon, 17 Feb 2020 11:47:37 +0000   Mon, 17 Feb 2020 11:31:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Mon, 17 Feb 2020 11:47:37 +0000   Mon, 17 Feb 2020 11:31:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Mon, 17 Feb 2020 11:47:37 +0000   Mon, 17 Feb 2020 11:31:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Mon, 17 Feb 2020 11:47:37 +0000   Mon, 17 Feb 2020 11:32:32 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  10.128.15.200
  Hostname:    kubeadm-fedora
Capacity:
  cpu:                2
  ephemeral-storage:  104844988Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             7493036Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  96625140781
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             7390636Ki
  pods:               110
System Info:
  Machine ID:                 41689852cca44b659f007bb418a6fa9f
  System UUID:                390D88CD-3D28-5657-8D0C-83AB1974C88A
  Boot ID:                    bff1c808-788e-48b8-a789-4fee4e800554
  Kernel Version:             3.10.0-1062.9.1.el7.x86_64
  OS Image:                   CentOS Linux 7 (Core)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://19.3.5
  Kubelet Version:            v1.17.2
  Kube-Proxy Version:         v1.17.2
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (8 in total)
  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
  kube-system                 coredns-6955765f44-d9fb4                  100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     15m
  kube-system                 coredns-6955765f44-l7xrk                  100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     15m
  kube-system                 etcd-kubeadm-fedora                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
  kube-system                 kube-apiserver-kubeadm-fedora             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
  kube-system                 kube-controller-manager-kubeadm-fedora    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
  kube-system                 kube-flannel-ds-amd64-v6m2w               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      15m
  kube-system                 kube-proxy-d65kl                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
  kube-system                 kube-scheduler-kubeadm-fedora             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                850m (42%)  100m (5%)
  memory             190Mi (2%)  390Mi (5%)
  ephemeral-storage  0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                From                        Message
  ----    ------                   ----               ----                        -------
  Normal  NodeHasSufficientMemory  16m (x6 over 16m)  kubelet, kubeadm-fedora     Node kubeadm-fedora status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    16m (x5 over 16m)  kubelet, kubeadm-fedora     Node kubeadm-fedora status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     16m (x5 over 16m)  kubelet, kubeadm-fedora     Node kubeadm-fedora status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  16m                kubelet, kubeadm-fedora     Updated Node Allocatable limit across pods
  Normal  Starting                 15m                kubelet, kubeadm-fedora     Starting kubelet.
  Normal  NodeHasSufficientMemory  15m                kubelet, kubeadm-fedora     Node kubeadm-fedora status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    15m                kubelet, kubeadm-fedora     Node kubeadm-fedora status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     15m                kubelet, kubeadm-fedora     Node kubeadm-fedora status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  15m                kubelet, kubeadm-fedora     Updated Node Allocatable limit across pods
  Normal  Starting                 15m                kube-proxy, kubeadm-fedora  Starting kube-proxy.
  Normal  NodeReady                15m                kubelet, kubeadm-fedora     Node kubeadm-fedora status is now: NodeReady
$ kubectl get nodes
NAME             STATUS   ROLES    AGE   VERSION
kubeadm-fedora   Ready    master   17m   v1.17.2
$ kubectl get pods -A
NAMESPACE     NAME                                     READY   STATUS    RESTARTS   AGE
kube-system   coredns-6955765f44-d9fb4                 1/1     Running   0          17m
kube-system   coredns-6955765f44-l7xrk                 1/1     Running   0          17m
kube-system   etcd-kubeadm-fedora                      1/1     Running   0          17m
kube-system   kube-apiserver-kubeadm-fedora            1/1     Running   0          17m
kube-system   kube-controller-manager-kubeadm-fedora   1/1     Running   0          17m
kube-system   kube-flannel-ds-amd64-v6m2w              1/1     Running   0          17m
kube-system   kube-proxy-d65kl                         1/1     Running   0          17m
kube-system   kube-scheduler-kubeadm-fedora            1/1     Running   0          17m
0
votes

The PodCIDR value is showing as 10.244.0.0/24. For Flannel to work correctly, you must pass --pod-network-cidr=10.244.0.0/16 to kubeadm init.
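A sketch of how you might confirm which CIDR the cluster was actually initialized with (the manifest path below is the standard kubeadm location), and the usual remedy if it is wrong:

# The controller-manager flags record the cluster CIDR kubeadm configured
sudo grep -- '--cluster-cidr' /etc/kubernetes/manifests/kube-controller-manager.yaml

# If it does not show 10.244.0.0/16, the simplest fix is to start over
sudo kubeadm reset
sudo kubeadm init --pod-network-cidr=10.244.0.0/16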