
I am trying to set up a kubeadm v1.13.1 high-availability cluster on premises using this guide:

https://kubernetes.io/docs/setup/independent/high-availability/

After setting up the first master, I got the join command and tried to execute it on the second master as explained:

sudo kubeadm join 10.240.0.16:6443 --token ih3zt7.iuhej18qzma0zigm --discovery-token-ca-cert-hash sha256:6d509781604e2b93c326318e9aa9d982a9bccbf3f8fb8feb1cf25afc1bbb53c0 --experimental-control-plane

[preflight] Running pre-flight checks

[discovery] Trying to connect to API Server "10.240.0.16:6443"

[discovery] Created cluster-info discovery client, requesting info from "https://10.240.0.16:6443"

[discovery] Requesting info from "https://10.240.0.16:6443" again to validate TLS against the pinned public key

[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.240.0.16:6443"

[discovery] Successfully established connection with API Server "10.240.0.16:6443"

[join] Reading configuration from the cluster...

[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'

[join] Running pre-flight checks before initializing the new control plane instance

[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly

[certs] Generating "apiserver" certificate and key

[certs] apiserver serving cert is signed for DNS names [kb8-master2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.240.0.33 10.240.0.16 10.240.0.16]

[certs] Generating "apiserver-kubelet-client" certificate and key

[certs] Generating "front-proxy-client" certificate and key

[certs] Generating "etcd/peer" certificate and key

[certs] etcd/peer serving cert is signed for DNS names [kb8-master2 localhost kb8-master1] and IPs [10.240.0.33 127.0.0.1 ::1 10.240.0.4]

[certs] Generating "etcd/healthcheck-client" certificate and key

[certs] Generating "apiserver-etcd-client" certificate and key

[certs] Generating "etcd/server" certificate and key

[certs] etcd/server serving cert is signed for DNS names [kb8-master2 localhost kb8-master1] and IPs [10.240.0.33 127.0.0.1 ::1 10.240.0.4]

[certs] valid certificates and keys now exist in "/etc/kubernetes/pki"

[certs] Using the existing "sa" key

[kubeconfig] Using existing up-to-date kubeconfig file: "/etc/kubernetes/admin.conf"

[kubeconfig] Writing "controller-manager.conf" kubeconfig file

[kubeconfig] Writing "scheduler.conf" kubeconfig file

[etcd] Checking Etcd cluster health

[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace

[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

[kubelet-start] Activating the kubelet service

[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...

[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kb8-master2" as an annotation

[etcd] Announced new etcd member joining to the existing etcd cluster

[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"

[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace

[kubelet-check] Initial timeout of 40s passed.

error uploading configuration: Get https://10.240.0.16:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config: unexpected EOF

10.240.0.16 is the load balancer IP. What could be causing this? I have also applied the Weave Net plugin on master1.

I also noticed that the kube-apiserver container on the master had exited.

On the master node I saw the following:

sudo docker ps -a | grep kube-apiserver

7629b25ba441 40a63db91ef8 "kube-apiserver --au…" 2 minutes ago Exited (255) About a minute ago

sudo docker logs 7629b25ba441

Flag --insecure-port has been deprecated, This flag will be removed in a future version.

I1222 06:53:51.795759 1 server.go:557] external host was not specified, using 10.240.0.4

I1222 06:53:51.796033 1 server.go:146] Version: v1.13.1

I1222 06:53:52.251060 1 plugins.go:158] Loaded 8 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.

I1222 06:53:52.251161 1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.

I1222 06:53:52.253467 1 plugins.go:158] Loaded 8 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.

I1222 06:53:52.253491 1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.

F1222 06:54:12.257490 1 storage_decorator.go:57] Unable to create storage backend: config (&{ /registry [https://10.240.0.4:2379] /etc/kubernetes/pki/apiserver-etcd-client.key /etc/kubernetes/pki/apiserver-etcd-client.crt /etc/kubernetes/pki/etcd/ca.crt true 0xc0006e19e0 5m0s 1m0s}), err (dial tcp 10.240.0.4:2379: connect: connection refused)
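The fatal line above shows the API server on master1 dying because its local etcd member (10.240.0.4:2379) refuses connections, which in turn takes down the load-balanced endpoint mid-join. One way to confirm this on master1 is the sketch below; the etcd image tag and certificate paths are assumptions based on kubeadm v1.13 defaults, not output shown in the question:

# Is the etcd static pod container running at all?
sudo docker ps -a | grep etcd

# Query etcd health directly, using the healthcheck-client cert kubeadm generated
sudo docker run --rm --net host -v /etc/kubernetes/pki/etcd:/etc/kubernetes/pki/etcd k8s.gcr.io/etcd:3.2.24 etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/healthcheck-client.crt --key-file /etc/kubernetes/pki/etcd/healthcheck-client.key --endpoints https://10.240.0.4:2379 cluster-health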

Can you try to access the API object on the main control plane node from the node you wish to join, for example: curl -k https://10.240.0.16:6443/api/v1/namespaces/kube-public/configmaps/cluster-info? – Nick_Kh

This was working fine before trying to join the master. – Muthulingam

The curl call was working perfectly before joining. – Muthulingam

I have modified the question and added the kube-apiserver docker logs; please check that. I am struggling to find what I was missing. – Muthulingam

You are right, thanks. I finally found the issue: I was using the v1.12.2 version of the kubeadm config file in my playbook. I changed the file content to match the v1.13 format exactly, and that solved the issue. – Muthulingam

1 Answer


As @Muthulingam mentioned in the comments, the issue was solved by regenerating the master node configuration with a kubeadm config file that matches the actual cluster version (via the kubernetesVersion field).
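For reference, a minimal kubeadm config for this setup might look like the sketch below. It is an assumption reconstructed from the question (the load balancer address and version come from the question itself): kubeadm v1.13 expects the kubeadm.k8s.io/v1beta1 config API, while v1.12 configs used v1alpha3, so a stale config format can break the control plane join.

apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
# Cluster version from the question
kubernetesVersion: v1.13.1
# Load balancer address; all control plane nodes join through it
controlPlaneEndpoint: "10.240.0.16:6443"

The first control plane node would then be initialized with kubeadm init --config <file>.yaml, as in the linked high-availability guide.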