
I am following https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/ to create a Kubernetes cluster with 3 Vagrant Ubuntu VMs on my local Mac. However, after "kubeadm join" completes successfully on the workers, "kubectl get nodes" on the master only shows the master node. I have tried several fixes found online, but the issue remains.

Here is some information about my cluster:

3 Vagrant virtual machines (Ubuntu 16.04):

- (master) eth0: 10.0.2.15, eth1: 192.168.101.101 --> kubeadm init --ignore-preflight-errors Swap --apiserver-advertise-address=192.168.101.101
- (worker1) eth0: 10.0.2.15, eth1: 192.168.101.102 --> kubeadm join 192.168.101.101:6443 --token * --discovery-token-ca-cert-hash sha256:* --ignore-preflight-errors Swap
- (worker2) eth0: 10.0.2.15, eth1: 192.168.101.103 --> kubeadm join 192.168.101.101:6443 --token * --discovery-token-ca-cert-hash sha256:* --ignore-preflight-errors Swap
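
For context, these are the checks I can run (standard kubectl and iproute2 commands, listed here as a sketch rather than actual output from my cluster):

# on the master: which internal IP did each node register with?
kubectl get nodes -o wide

# on each VM: which interface carries the default route?
ip route show default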

Any ideas on this?

Regards, Jacky

Screenshots of the kubelet log: log-new-part1, log-new-part2

Can you post the status and logs of the kubelet service on a worker node? – Const
Output for "systemctl status kubelet": kubelet.service - kubelet: The Kubernetes Node Agent Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled) Drop-In: /etc/systemd/system/kubelet.service.d └─10-kubeadm.conf Active: active (running) since Wed 2018-06-20 06:10:10 UTC; 2h 29min ago Docs: kubernetes.io/docs Main PID: 4721 (kubelet) Tasks: 12 Memory: 39.6M CPU: 2min 25.780s CGroup: /system.slice/kubelet.serviceJacky
Output for "journalctl -xeu kubelet": <br/> pod_container_deletor.go:77] Container "ea4ba30fd23bf91cdce59a3e5402317bfeff1474600b8b3a68c06af2f3289f1c" not found in pod's containers....] <br/> 4721 kubelet_node_status.go:377] Error updating node status, will retry: failed to patch status "{\"status\":{\"$setElementOrder/addresses\":[{\"type\":\"In Jun 20 06:48:38 default-ubuntu-1604 kubelet[4721]: 9356b8610\",\"k8s.gcr.io/pause-amd64:3.1\"],\"sizeBytes\":742472}],...Jacky

2 Answers


Your problem is the default route on the slave (worker) nodes: it points at the Vagrant NAT interface (eth0, which is 10.0.2.15 on every VM), so the nodes register with the wrong address. Fix the routing table so the default route goes through eth1.

I use a script like this to fix the routes after OS startup.

#!/bin/bash

# remove the default route installed via Vagrant's NAT gateway (10.0.2.2)
if ip route | grep -q '^default via 10.0.2.2 dev'; then
        ip route delete default via 10.0.2.2
fi

# add a default route through eth1 (the host-only network) if none exists yet;
# 192.168.15.1 is the gateway of my host-only subnet, adjust it to yours
if ! ip route | grep -Eq '^default .* eth1'; then
        ip route add default via 192.168.15.1
fi
exit 0
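
To apply this at every boot, one option (my suggestion here, not something the script above depends on) is a root cron entry pointing at the script; the path below is just an example:

# install the script, then register it in root's crontab (crontab -e):
@reboot /usr/local/bin/fix-default-route.sh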

Be sure that every node (masters and workers) has a unique hostname. After a few hours I realized that my master and the VMs cloned from it all had the same hostname, "master". After changing my worker nodes' hostnames to worker-node-01 and worker-node-02, everything worked perfectly.
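
A sketch of the rename on each worker (hostnamectl and kubeadm reset/join are standard commands; the node names, the /etc/hosts line, and the placeholder token/hash are only illustrations, adapt them to your cluster):

# pick a unique name per worker (shown for worker-node-01)
sudo hostnamectl set-hostname worker-node-01
# make the new name resolvable locally
echo "127.0.1.1 worker-node-01" | sudo tee -a /etc/hosts
# if the node already joined under the duplicate name, it may need to re-join
sudo kubeadm reset
sudo kubeadm join 192.168.101.101:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --ignore-preflight-errors Swap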