2 votes

I am trying to set up a Kubernetes cluster for testing purposes with a master and one minion. When I run kubectl get nodes, the minion always shows NotReady. Following is the configuration on the minion in /etc/kubernetes/kubelet:

KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=centos-minion"
KUBELET_API_SERVER="--api-servers=http://centos-master:8080"
KUBELET_ARGS=""

When the kubelet service is started, the following errors appear in the logs:

Mar 16 13:29:49 centos-minion kubelet: E0316 13:29:49.126595 53912 event.go:202] Unable to write event: 'Post http://centos-master:8080/api/v1/namespaces/default/events: dial tcp 10.143.219.12:8080: i/o timeout' (may retry after sleeping)

Mar 16 13:16:01 centos-minion kube-proxy: E0316 13:16:01.195731 53595 event.go:202] Unable to write event: 'Post http://localhost:8080/api/v1/namespaces/default/events: dial tcp [::1]:8080: getsockopt: connection refused' (may retry after sleeping)

Following is the config on the master in /etc/kubernetes/apiserver:

KUBE_API_ADDRESS="--bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet-port=10250"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

And in /etc/kubernetes/config:

KUBE_ETCD_SERVERS="--etcd-servers=http://centos-master:2379"
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://centos-master:8080"

On the master, the following processes are running properly:

kube 5657 1 0 Mar15 ? 00:12:05 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=http://centos-master:2379 --address=0.0.0.0 --port=8080 --kubelet-port=10250 --allow-privileged=false --service-cluster-ip-range=10.254.0.0/16

kube 5690 1 1 Mar15 ? 00:16:01 /usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=http://centos-master:8080

kube 5723 1 0 Mar15 ? 00:02:23 /usr/bin/kube-scheduler --logtostderr=true --v=0 --master=http://centos-master:8080

So I still do not know what is missing.
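For reference, connectivity from the minion to the apiserver's insecure port can be checked like this (a diagnostic sketch, assuming curl is installed on the minion):

# from the minion: the apiserver's health endpoint should answer "ok"
curl http://centos-master:8080/healthz

# or, without curl, just check that the port accepts connections
telnet centos-master 8080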

Are you able to telnet to the master on port 8080? And what is the IP of the master? - user2486495
Yes. I am able to ping the master and it is reachable. The IP of the master is 10.x.x.12, and this is also configured in /etc/hosts. - Prashant
Try "telnet 10.x.x.12 8080" from minion and share the output. - user2486495
Test the api-server from the master itself. It looks like it did not start (or did not start correctly). - Norbert van Nobelen
The apiserver seems to be fine: kube 7511 1 0 Mar16 ? 00:07:56 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=127.0.0.1:2379 --insecure-bind-address=0.0.0.0 --insecure-port=8080 --kubelet-port=10250 --allow-privileged=false --service-cluster-ip-range=10.254.0.0/16; kube 7545 1 1 Mar16 ? 00:10:17 /usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=centos-master:8080; kube 7579 1 0 Mar16 ? 00:01:35 /usr/bin/kube-scheduler --logtostderr=true --v=0 --master=centos-master:8080 - Prashant

3 Answers

0 votes

I was having the same issue when setting up Kubernetes on Fedora following the steps on kubernetes.io. In that tutorial, KUBELET_ARGS="--cgroup-driver=systemd" is commented out in the node's /etc/kubernetes/kubelet; if you uncomment it, you will see the node status become Ready. Hope this helps.
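Roughly, the change on the node looks like this (a sketch of the relevant line; the restart command assumes kubelet is managed by systemd):

# /etc/kubernetes/kubelet on the node: uncomment the cgroup driver argument
KUBELET_ARGS="--cgroup-driver=systemd"

# then restart the kubelet so it picks up the change
systemctl restart kubelet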

0 votes

Rejoin the worker nodes to the master.

My install is on three physical machines: one master and two workers. All of them needed reboots.

You will need your join token, which you probably don't have at hand:

sudo kubeadm token list

Copy the TOKEN field data; the output looks like this (no, that's not my real token):

TOKEN                     TTL   EXPIRES                     USAGES                   DESCRIPTION                                                 EXTRA GROUPS
ow3v08ddddgmgzfkdkdkd7    18h   2018-07-30T12:39:53-05:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

Then join the cluster. The master node IP is the real IP address of your master machine:

sudo kubeadm join --token <YOUR TOKEN HASH> <MASTER_NODE_IP>:6443 --discovery-token-unsafe-skip-ca-verification
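If kubeadm token list prints nothing (the default bootstrap token expires after 24 hours), you can generate a fresh one first; the command below assumes kubeadm 1.9 or newer, which also prints the full join command for you:

sudo kubeadm token create --print-join-command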

0 votes

You have to restart the kubelet service on the node (systemctl enable kubelet && systemctl restart kubelet). Then you can see your node in the "Ready" status.
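For completeness, the commands run on the node, followed by the check on the master (assuming systemd on the node and a configured kubectl on the master):

# on the node
systemctl enable kubelet
systemctl restart kubelet

# on the master, the node should now report Ready
kubectl get nodes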