0 votes

I am running a Kubernetes cluster with multiple masters (3 master nodes) behind HAProxy, and I am using an external etcd cluster for this project. For generating the SSL certificates I'm using cfssl (CloudFlare's tool).

I created an etcd service on each master node:

[Unit]
Description=etcd
Documentation=https://github.com/coreos


[Service]
ExecStart=/usr/local/bin/etcd \
  --name 192.168.1.21 \
  --cert-file=/etc/etcd/kubernetes.pem \
  --key-file=/etc/etcd/kubernetes-key.pem \
  --peer-cert-file=/etc/etcd/kubernetes.pem \
  --peer-key-file=/etc/etcd/kubernetes-key.pem \
  --trusted-ca-file=/etc/etcd/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth \
  --initial-advertise-peer-urls https://192.168.1.21:2380 \
  --listen-peer-urls https://192.168.1.21:2380 \
  --listen-client-urls https://192.168.1.21:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.1.21:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster 192.168.1.21=https://192.168.1.21:2380,192.168.1.22=https://192.168.1.22:2380,192.168.1.23=https://192.168.1.23:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5


[Install]
WantedBy=multi-user.target
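
Before running kubeadm init, it is worth confirming that the external etcd cluster is actually healthy. A sketch using etcdctl's v3 API, reusing the endpoints and certificate paths from the unit file above (assumes etcdctl is installed on the node):

```shell
# Check every etcd member with the same TLS material the unit file uses.
# ETCDCTL_API=3 selects the v3 API on older etcdctl builds.
ETCDCTL_API=3 etcdctl \
  --endpoints=https://192.168.1.21:2379,https://192.168.1.22:2379,https://192.168.1.23:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem \
  endpoint health
```

All three endpoints should report healthy before kubeadm init is run against them.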

Then I ran kubeadm init with a config file:

kubeadm init --config config.yaml

where config.yaml is:

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: "192.168.1.20:6443"
etcd:
    external:
        endpoints:
        - https://192.168.1.21:2379
        - https://192.168.1.22:2379
        - https://192.168.1.23:2379
        caFile: /etc/etcd/ca.pem
        certFile: /etc/etcd/kubernetes.pem
        keyFile: /etc/etcd/kubernetes-key.pem

After that my cluster was ready:

kubectl get nodes -o wide

NAME      STATUS   ROLES    AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
master1   Ready    master   25h   v1.17.2   192.168.1.21   <none>        Ubuntu 16.04.6 LTS   4.4.0-173-generic   docker://19.3.5
master2   Ready    master   25h   v1.17.2   192.168.1.22   <none>        Ubuntu 16.04.6 LTS   4.4.0-142-generic   docker://19.3.5
master3   Ready    master   25h   v1.17.2   192.168.1.23   <none>        Ubuntu 16.04.6 LTS   4.4.0-142-generic   docker://19.3.5
worker1   Ready    worker   25h   v1.17.2   192.168.1.27   <none>        Ubuntu 16.04.6 LTS   4.4.0-142-generic   docker://19.3.5
worker2   Ready    worker   25h   v1.17.2   192.168.1.28   <none>        Ubuntu 16.04.6 LTS   4.4.0-142-generic   docker://19.3.5
worker3   Ready    worker   25h   v1.17.2   192.168.1.29   <none>        Ubuntu 16.04.6 LTS   4.4.0-142-generic   docker://19.3.5

After that I tried to apply flannel with:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Here is my problem; the CoreDNS and flannel pods never come up (kubectl get pods --all-namespaces):

NAMESPACE     NAME                              READY   STATUS              RESTARTS   AGE
kube-system   coredns-6955765f44-246cj          0/1     ContainerCreating   0          51m
kube-system   coredns-6955765f44-xrwh4          0/1     ContainerCreating   0          24h
kube-system   coredns-7f85fdfc6b-t7jdr          0/1     ContainerCreating   0          48m
kube-system   kube-apiserver-master1            1/1     Running             0          25h
kube-system   kube-apiserver-master2            1/1     Running             1          25h
kube-system   kube-apiserver-master3            1/1     Running             0          25h
kube-system   kube-controller-manager-master1   1/1     Running             0          56m
kube-system   kube-controller-manager-master2   1/1     Running             0          25h
kube-system   kube-controller-manager-master3   1/1     Running             0          25h
kube-system   kube-flannel-ds-amd64-6j6lb       0/1     Error               285        25h
kube-system   kube-flannel-ds-amd64-fdbxg       0/1     CrashLoopBackOff    14         25h
kube-system   kube-flannel-ds-amd64-mjfjf       0/1     CrashLoopBackOff    286        25h
kube-system   kube-flannel-ds-amd64-r46fk       0/1     CrashLoopBackOff    285        25h
kube-system   kube-flannel-ds-amd64-t8tfg       0/1     CrashLoopBackOff    284        25h
kube-system   kube-proxy-6h6k9                  1/1     Running             0          25h
kube-system   kube-proxy-cjgmv                  1/1     Running             0          25h
kube-system   kube-proxy-hblk8                  1/1     Running             0          25h
kube-system   kube-proxy-wdvc9                  1/1     Running             0          25h
kube-system   kube-proxy-z48zn                  1/1     Running             0          25h
kube-system   kube-scheduler-master1            1/1     Running             0          25h
kube-system   kube-scheduler-master2            1/1     Running             0          25h
kube-system   kube-scheduler-master3            1/1     Running             0          25h
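
The restart counts on the flannel pods point at their logs; inspecting one of them (pod name taken from the listing above) usually reveals the root cause. For a cluster initialised without a pod CIDR, the typical message is that no pod CIDR has been assigned to the node:

```shell
# Show why the flannel container keeps restarting
kubectl -n kube-system logs kube-flannel-ds-amd64-6j6lb
# Typical failure when kubeadm init was run without a pod network range:
#   Error registering network: failed to acquire lease:
#   node "master1" pod cidr not assigned
```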


2 Answers

1 vote

I found my mistake: I had to add the pod network range to my config.yaml:

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: "192.168.1.20:6443"
apiServer:
  certSANs:
  - 192.168.1.20
  extraArgs:
    apiserver-count: "3"
etcd:
  external:
    endpoints:
    - https://192.168.1.21:2379
    - https://192.168.1.22:2379
    - https://192.168.1.23:2379
    caFile: /etc/etcd/ca.pem
    certFile: /etc/etcd/kubernetes.pem
    keyFile: /etc/etcd/kubernetes-key.pem
networking:
  podSubnet: 10.244.0.0/16
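
The 10.244.0.0/16 value is not arbitrary: it has to match the Network value in flannel's net-conf.json, which the kube-flannel.yml manifest ships with by default (as of the versions in use here):

```json
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "vxlan"
  }
}
```

If you use a different podSubnet, edit the net-conf.json in the flannel ConfigMap to match.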
0 votes

For flannel to work correctly, you must pass --pod-network-cidr=10.244.0.0/16 to kubeadm init (or, equivalently, set networking.podSubnet: 10.244.0.0/16 in the kubeadm config file).
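
As a worked illustration of why the range matters: flannel allocates one /24 subnet per node out of the cluster-wide pod CIDR, so a /16 leaves room for up to 256 nodes. A quick sketch with Python's standard ipaddress module (illustrative only, not part of the original answer):

```python
# Flannel hands each node a /24 carved out of the pod CIDR.
import ipaddress

pod_cidr = ipaddress.ip_network("10.244.0.0/16")

# The per-node /24 subnets flannel can allocate.
node_subnets = list(pod_cidr.subnets(new_prefix=24))

print(len(node_subnets))    # 256 possible node subnets
print(node_subnets[0])      # 10.244.0.0/24 (first node)
print(node_subnets[1])      # 10.244.1.0/24 (second node)
```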