8
votes

I recently installed Kubernetes on VMware and configured a few pods. During that setup, it automatically picked up the VM's IP address and used it in the configuration. I was able to access the application at the time, but I recently rebooted both the VM and the machine that hosts it. During this, the IP of the VM seems to have changed, and now I get the error below when running this command:

kubectl get pod -n NameSpaceX

userX@ubuntu:~$ kubectl get pod -n NameSpaceX
Unable to connect to the server: dial tcp 192.168.214.136:6443: connect: no route to host

userX@ubuntu:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Unable to connect to the server: dial tcp 192.168.214.136:6443: connect: no route to host

kubectl cluster-info and other related commands give the same output. In the VMware Workstation settings, the network adapter is configured to share the host's IP address. We are not sure whether this has any impact.

We also tried adding the entries below to /etc/hosts, but it did not help.

127.0.0.1 localhost
192.168.214.136 localhost
127.0.1.1 ubuntu

I expect to get the pods running again so that I can access the application. Instead of reinstalling all pods, which is time consuming, we are looking for a quick workaround to bring the pods back to a running state.


5 Answers

17
votes

If you use minikube, sometimes all you need to do is restart minikube.

run: minikube start
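If a plain start does not help, a minimal sketch (assuming a standard single-node minikube setup) is to check the state, do a full stop/start cycle, and then verify the pods:

minikube status                  # check whether the cluster and kubelet are running
minikube stop && minikube start  # full restart of the minikube cluster
kubectl get pods -A              # verify the pods come back up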

2
votes

The common practice is to copy config file to the home directory

sudo cp /etc/kubernetes/admin.conf ~/.kube/config && sudo chown $(id -u):$(id -g) $HOME/.kube/config

Also, make sure that the api-server address is valid:

server: https://<master-node-ip>:6443

If not, you can manually edit it using any text editor.
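As a rough sketch (the cluster name kubernetes below is the kubeadm default and may differ in your config), you can also check and update the address with kubectl itself:

kubectl config view --minify | grep server                                     # show the api-server address currently in use
kubectl config set-cluster kubernetes --server=https://<master-node-ip>:6443   # point the kubeconfig at the new IP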

1
votes

You need to export the admin.conf file as KUBECONFIG before running kubectl commands. You can set it as an environment variable:

export KUBECONFIG=<path>/admin.conf

After this you should be able to run kubectl commands, assuming the K8s cluster itself is set up properly.
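To make this survive new shells, one option (the path here is only an example; adjust it to where your admin.conf actually lives) is to append the export to your shell profile:

echo 'export KUBECONFIG=$HOME/admin.conf' >> ~/.bashrc   # example path, not a fixed location
source ~/.bashrc                                          # reload the profile in the current shell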

1
votes

I encountered the same issue - the problem was that the master node didn't expose port 6443 outside.

Below are the steps I took to fix it.

1 ) Check IP of api-server.
This can be verified via the .kube/config file (under server field) or with:
kubectl describe pod/kube-apiserver-<master-node-name> -n kube-system.

2 ) Run curl https://<kube-apiserver-IP>:6443 and see if port 6443 is open.

3 ) If port 6443 is open, you should get a certificate-related response like:

curl: (60) SSL certificate problem: unable to get local issuer certificate

4 ) If port 6443 is not open:
4.A ) SSH into master node.
4.B ) Run sudo firewall-cmd --add-port=6443/tcp --permanent (I'm assuming firewalld is installed; if the node uses ufw instead, see the sketch after the output below).

4.C ) Run sudo firewall-cmd --reload.

4.D ) Run sudo firewall-cmd --list-all and you should see port 6443 is updated:

public
  target: default
  icmp-block-inversion: no
  interfaces: 
  sources: 
  services: dhcpv6-client ssh
  ports: 6443/tcp <---- Here
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules:
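On hosts that use ufw rather than firewalld (common on Ubuntu, as in the question), a rough equivalent would be:

sudo ufw allow 6443/tcp   # open the api-server port
sudo ufw status           # confirm the rule is listed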
0
votes

Last night I had the exact same error installing Kubernetes using this puppet module: https://forge.puppet.com/puppetlabs/kubernetes

Turns out that it was an incorrect iptables setting on the master that blocked all non-local requests towards the API.

The way I solved it (bruteforce solution) is by

  1. completely remove all installed k8s-related software (also all config files, etcd data, docker images, mounted tmpfs filesystems, ...)
  2. wipe the iptables completely https://serverfault.com/questions/200635/best-way-to-clear-all-iptables-rules
  3. reinstall

This is what solved the problem in my case.

There is probably a much nicer and cleaner way to do this (i.e. simply change the iptables rules to allow access).
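As a sketch only (rule ordering and existing chains vary per setup), explicitly allowing API traffic instead of wiping everything might look like:

sudo iptables -I INPUT -p tcp --dport 6443 -j ACCEPT   # insert an accept rule for api-server traffic at the top of INPUT
sudo iptables -L INPUT -n --line-numbers               # verify the rule and its position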