I recently installed Kubernetes on VMware and configured a few pods. During that setup the cluster automatically picked up the VM's IP address, and I was able to access the application at the time. I then rebooted both the VM and the machine hosting it; I suspect the VM's IP address changed during this, because I now get the error below when running `kubectl get pod -n NameSpaceX`:
    userX@ubuntu:~$ kubectl get pod -n NameSpaceX
    Unable to connect to the server: dial tcp 192.168.214.136:6443: connect: no route to host
    userX@ubuntu:~$ kubectl version
    Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
    Unable to connect to the server: dial tcp 192.168.214.136:6443: connect: no route to host
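The `no route to host` part suggests that nothing on the network answers at 192.168.214.136 any more. One way to confirm the mismatch is to compare the address kubectl dials with the address the VM currently has. This is only a minimal sketch, assuming the kubeadm default kubeconfig at `~/.kube/config` (the path is an assumption):

```
# Addresses the VM currently has
ip addr show

# API server endpoint kubectl is configured to use
grep server ~/.kube/config
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'; echo
```

If the two no longer match, kubectl (and the cluster's own configuration) is still pointing at the old address.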
`kubectl cluster-info` and other related commands give the same error. In the VMware Workstation settings the network adapter is set to share the host's IP address (NAT); we are not sure whether that has any impact.
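In NAT mode the VM normally receives its address from VMware's built-in DHCP server, and that lease can change after the host or guest reboots. One way to keep the original address stable is to configure it statically inside the guest. The following is only a sketch assuming Ubuntu with netplan; the file name, the interface name `ens33`, the /24 subnet and the gateway/DNS address `192.168.214.2` are all assumptions that need to be checked against the actual setup (`ip link`, `ip route`):

```
# File name is just an example; an existing file under /etc/netplan/ can be edited instead
sudo tee /etc/netplan/01-static-ip.yaml > /dev/null <<'EOF'
network:
  version: 2
  ethernets:
    ens33:                              # assumed interface name, verify with `ip link`
      dhcp4: no
      addresses: [192.168.214.136/24]   # the address the cluster was originally set up with
      gateway4: 192.168.214.2           # assumed VMware NAT gateway, verify with `ip route`
      nameservers:
        addresses: [192.168.214.2]
EOF
sudo netplan apply
```

Reserving the address for the VM's MAC in VMware's NAT/DHCP settings should achieve the same result without touching the guest.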
We also tried adding the entries below to /etc/hosts, but it did not help.
    127.0.0.1        localhost
    192.168.214.136  localhost
    127.0.1.1        ubuntu
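As far as we understand, /etc/hosts cannot help here: kubectl dials the literal IP address stored in `~/.kube/config` (and the kubeadm-generated API server certificate only contains the original IP), so no hostname mapping is ever consulted. A quick check that the API server process is at least still running on the node (sketch):

```
# Is anything listening on the API server port on this node?
sudo ss -tlnp | grep 6443
```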
We expect to get the pods running again so that the application is reachable. Instead of reinstalling everything, which is time-consuming, we are looking for a quick workaround to bring the pods back to a running state.
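The direction we are considering is to restore the original IP (for example with the static-IP sketch above), since the control-plane certificates, kubeconfigs and etcd configuration all still reference 192.168.214.136; once the node has that address again, the control-plane pods should come back after a kubelet restart. A rough outline, assuming a single-node kubeadm-style cluster:

```
# After the node again has 192.168.214.136:
sudo systemctl restart kubelet

# Wait for the control plane and the workloads to come back
kubectl cluster-info
kubectl get pods --all-namespaces
kubectl get pod -n NameSpaceX
```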