1 vote

I'm setting up a 2-node cluster in Kubernetes: 1 master node and 1 worker node. After setting up the master node, I went through the installation steps for docker, kubeadm, kubelet and kubectl on the worker node and then ran the join command. On the master node I see 2 nodes in Ready state (master and worker), but when I try to run any kubectl command on the worker node I get the connection refused error below. I do not see any admin.conf, and nothing is set in .kube/config. Are these files also needed on the worker node? If so, how do I get them, and how do I resolve the error below? Appreciate your help.

root@kubework:/etc/kubernetes# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?

root@kubework:/etc/kubernetes# kubectl cluster-info
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server localhost:8080 was refused - did you specify the right host or port?
root@kubework:/etc/kubernetes#


4 Answers

2 votes

root@kubework:/etc/kubernetes# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?

kubectl is configured and working on the master by default: it requires the kube-apiserver pod and a ~/.kube/config file that points to it.

Worker nodes don't run their own kube-apiserver; what we want is to reuse the master's configuration so that kubectl on the worker talks to the API server on the master. To achieve this, copy the ~/.kube/config file from the master to ~/.kube/config on the worker, where ~ is the home directory of the user running kubectl on each node (which may of course be different), for example as sketched below.
Once that is done, you can use the kubectl command from the worker node exactly as you do from the master node.
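A minimal sketch of that copy, run on the worker (<user> and <master-host> are placeholders for whatever user and hostname apply to your master; adjust paths to your setup):

mkdir -p $HOME/.kube
scp <user>@<master-host>:~/.kube/config $HOME/.kube/config
kubectl get nodes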

0 votes

Yes, these files are needed. Copy the master's admin.conf into .kube/config under the home directory of the user running kubectl on each worker node.
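For example, a rough sketch (<control-plane-host> is a placeholder for your master's address; adjust the user and destination path to your setup):

mkdir -p $HOME/.kube
scp root@<control-plane-host>:/etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config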

0 votes

This is expected behavior even when using kubectl on the master node as a non-root account; by default this config file is stored for the root account in /etc/kubernetes/admin.conf:

To make kubectl work for your non-root user, run these commands, which are also part of the kubeadm init output:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • Alternatively, if you are the root user on the master, you can run:
    export KUBECONFIG=/etc/kubernetes/admin.conf
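Either way, a quick sanity check from the same shell should now reach the API server (exact output depends on your cluster):

kubectl get nodes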

Optionally, to control your cluster from machines other than the control-plane node:

scp root@<control-plane-host>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf get nodes

Note:

The KUBECONFIG environment variable holds a list of kubeconfig files. For Linux and Mac, the list is colon-delimited. For Windows, the list is semicolon-delimited. The KUBECONFIG environment variable is not required. If the KUBECONFIG environment variable doesn't exist, kubectl uses the default kubeconfig file, $HOME/.kube/config.
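For illustration only (the file names here are just examples), pointing kubectl at two kubeconfig files on Linux could look like this:

export KUBECONFIG=$HOME/.kube/config:$HOME/admin.conf
kubectl config get-contexts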

0 votes

I tried many of the solutions that just copy /etc/kubernetes/admin.conf to ~/.kube/config, but none of them worked for me.

My OS is Ubuntu, and the problem was resolved by removing, purging and re-installing with the following steps:

  1. sudo dpkg -r kubeadm kubectl
  2. sudo dpkg -P kubeadm kubectl
  3. sudo apt-get install -y kubelet kubeadm kubectl
  4. sudo apt-mark hold kubelet kubeadm kubectl
  5. curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" (downloading kubectl again; this is what actually worked for me - see the note after the output below)
  6. kubectl get nodes

NAME       STATUS   ROLES                  AGE   VERSION
mymaster   Ready    control-plane,master   31h   v1.20.4
myworker   Ready    <none>                 31h   v1.20.4
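One note on step 5: curl only downloads the kubectl binary into the current directory, so if you go that route you will most likely also need to put it on your PATH, e.g. with the install command from the official kubectl docs:

sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl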