82
votes

I have followed the Hello Node tutorial at http://kubernetes.io/docs/hellonode/.

When I run:

kubectl run hello-node --image=gcr.io/PROJECT_ID/hello-node:v1 --port=8080

I get:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

Why does the command-line tool try to connect to localhost?


23 Answers

92
votes

The issue is that your kubeconfig is not set up correctly. To auto-generate it, run:

gcloud container clusters get-credentials "CLUSTER NAME"

This worked for me.
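
Once the credentials are fetched, you can confirm that kubectl now points at the cluster instead of localhost; a quick sanity check, assuming gcloud wrote the entry to your default kubeconfig:

kubectl config current-context
kubectl cluster-info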

26
votes

Make sure your config is set to the project:

gcloud config set project [PROJECT_ID]

1. List the clusters in the account:

gcloud container clusters list

2. Check the output:

NAME           LOCATION       MASTER_VERSION  MASTER_IP      MACHINE_TYPE  NODE_VERSION  NUM_NODES  STATUS
alpha-cluster  asia-south1-a  1.9.7-gke.6     35.200.254.78  f1-micro      1.9.7-gke.6   3          RUNNING

3. Run the following command to fetch credentials for your running cluster:

gcloud container clusters get-credentials your-cluster-name --zone your-zone --project your-project

4. Output like the following should appear:

Fetching cluster endpoint and auth data.
kubeconfig entry generated for alpha-cluster.

5. Check the details of the nodes kubectl now talks to, for example by listing all nodes with extra details:

kubectl get nodes -o wide

Should be good to go.

9
votes

After running the "kubeadm init" command, Kubernetes asks you to run the following as a regular user:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

But if you run these commands as a regular user, you will get "The connection to the server localhost:8080 was refused - did you specify the right host or port?" when you then try to use kubectl as root, and vice versa. So access kubectl as the same user who executed the commands above.
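
If you occasionally do need kubectl as root, one option is to point it at the admin kubeconfig explicitly instead of copying it; a minimal sketch, assuming the default kubeadm path /etc/kubernetes/admin.conf:

sudo KUBECONFIG=/etc/kubernetes/admin.conf kubectl get nodes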

5
votes

I reproduced the same error while doing the Udacity tutorial Scalable Microservices with Kubernetes (https://classroom.udacity.com/courses/ud615), at the Using Kubernetes point, Part 3 of the lesson.

Launch a Single Instance:

kubectl run nginx --image=nginx:1.10.0

Error:

Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.

How I resolved the error:

Log in to Google Cloud Platform

Navigate to Container Engine (Google Cloud Platform > Container Engine)

Click CONNECT on the cluster

Use the login credentials to access Cluster [NAME] in your terminal

Proceeded with work!
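
For reference, the CONNECT button surfaces a gcloud command of roughly this shape (the bracketed names are placeholders for your own cluster, zone, and project):

gcloud container clusters get-credentials [NAME] --zone [ZONE] --project [PROJECT_ID]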

4
votes

I had the same error, and this worked for me. Run

minikube status

if the response is

type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

run minikube start; once it finishes, the status should show

type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

You can proceed
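
The two steps can also be combined so the cluster is only started when it is not already running; a small convenience sketch, assuming minikube status exits non-zero when the cluster is down (recent versions do):

minikube status >/dev/null 2>&1 || minikube start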

3
votes

This error means that kubectl is attempting to connect to a Kubernetes API server running on your local machine, which is the default if you haven't configured it to talk to a remote API server.
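
To see which server kubectl is currently configured to talk to, you can inspect the active kubeconfig entry:

kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'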

3
votes

Reinitialising gcloud with the proper account and project worked for me.

gcloud init

After this, retrying the command below was successful and the kubeconfig entry was generated.

gcloud container clusters get-credentials "cluster_name"

Check the cluster info with

kubectl cluster-info
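
If you are unsure which account and project gcloud is currently using before reinitialising, you can check with:

gcloud config list
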
2
votes

I had this problem using a local Docker setup. The thing to do is check the logs of the containers it spins up to figure out what went wrong. For me it transpired that etcd had fallen over:

   $ docker logs <etcdContainerId>
   <snip>
   2016-06-15 09:02:32.868569 C | etcdmain: listen tcp 127.0.0.1:7001: bind: address already in use

Aha! I'd been playing with Cassandra in a Docker container, and I'd forwarded all the ports since I wasn't sure which ones it needed exposed; 7001 is one of its ports. Stopping Cassandra, cleaning up the mess, and restarting it fixed things.
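
To find out which process is holding a port before cleaning up, something like this works on most Linux hosts (shown for port 7001 from the log above):

sudo lsof -i :7001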

2
votes

If you created a cluster on AWS using kops, then kops creates ~/.kube/config for you, which is nice. But if someone else needs to connect to that cluster, then they also need to install kops so that it can create the kubeconfig for them:

export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)
export CLUSTER_ALIAS=kubernetes-cluster

kubectl config set-context ${CLUSTER_ALIAS} \
    --cluster=${CLUSTER_FULL_NAME} \
    --user=${CLUSTER_FULL_NAME}

kubectl config use-context ${CLUSTER_ALIAS}

kops export kubecfg --name ${CLUSTER_FULL_NAME} \
  --region=${CLUSTER_REGION} \
  --state=${KOPS_STATE_STORE}
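
Afterwards you can confirm the new context is active (note that CLUSTER_FULL_NAME, CLUSTER_REGION, and KOPS_STATE_STORE in the snippet above must already be set in your shell):

kubectl config get-contexts
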
2
votes

Try running it with sudo, for example:

sudo kubectl ...

2
votes

Regardless of your environment (gcloud or not), you need to point kubectl at a kubeconfig. By default, kubectl expects the path $HOME/.kube/config; alternatively, point to a custom path via an environment variable (for scripting etc.): export KUBECONFIG=/your_kubeconfig_path

Please refer to: https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/

If you don't have a kubeconfig file for your cluster, create one by referring to: https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/

You will need the cluster's ca.crt and the apiserver-kubelet-client key and cert.
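
A minimal sketch of building such a kubeconfig by hand with kubectl; all names and paths here are placeholders, and it assumes the ca.crt and apiserver-kubelet-client files mentioned above are in the current directory:

kubectl config set-cluster my-cluster \
    --server=https://<apiserver-host>:6443 \
    --certificate-authority=ca.crt --embed-certs=true
kubectl config set-credentials my-user \
    --client-certificate=apiserver-kubelet-client.crt \
    --client-key=apiserver-kubelet-client.key --embed-certs=true
kubectl config set-context my-context --cluster=my-cluster --user=my-user
kubectl config use-context my-context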

2
votes

I ran into the same trouble with a recent release; it seems you must use KUBECONFIG explicitly:

sudo cp /etc/kubernetes/admin.conf $HOME/

sudo chown $(id -u):$(id -g) $HOME/admin.conf

export KUBECONFIG=$HOME/admin.conf
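
Note that the export only lasts for the current shell session; to make it persistent you can append it to your shell profile (assuming bash):

echo 'export KUBECONFIG=$HOME/admin.conf' >> $HOME/.bashrc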

2
votes

I was getting an error when running

sudo kubectl get pods
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Finally, for my environment, this command parameter worked:

sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get pods

when executing kubectl as a non-root user.
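
To avoid typing the flag every time, one option is a shell alias; a convenience sketch, not required:

alias kubectl='sudo kubectl --kubeconfig /etc/kubernetes/admin.conf'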

2
votes

I was trying to connect with localhost and ended up with the same problem; then I found I needed to start a proxy to the Kubernetes API server.

kubectl proxy --port=8080

https://kubernetes.io/docs/tasks/extend-kubernetes/http-proxy-access-api/
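
With the proxy running, requests to localhost:8080 are forwarded to the API server, so for example this should now return the pods in the default namespace:

curl http://localhost:8080/api/v1/namespaces/default/pods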

1
votes

I had the same issue after a reboot; I followed the guide described here.

So try the following:

$ sudo -i
# swapoff -a
# exit
$ strace -eopenat kubectl version

After that it works fine.
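
Note that swapoff -a only disables swap until the next reboot, which is likely why the issue came back after rebooting. To make it persistent you can comment out the swap entry in /etc/fstab; a sketch, assuming GNU sed and whitespace-separated fstab fields (it writes a backup to /etc/fstab.bak first):

sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab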

1
votes

As an improvement to Omokahfe's answer:

minikube status

if the response is

E0623 09:12:24.603405   21127 status.go:396] kubeconfig endpoint: extract IP: "minikube" does not appear in /home/<user>/.kube/config
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Misconfigured
timeToStop: Nonexistent


WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`

run

minikube update-context

then it will show

* "minikube" context has been updated to point to 10.254.183.66:8443
* Current context is "minikube"

and then

minikube status

will show

type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
timeToStop: Nonexistent

1
votes

This happens because kubectl is not able to connect to the Kubernetes API server. Start your cluster:

minikube start

If you want to run kubectl against a specific kubeconfig file, you can do so via

 kubectl --kubeconfig ~/.kube/config  get jobs

~/.kube/config is the path of the config file; modify it to match your file path.

0
votes

I was also getting the same error as below:

Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.

Then I just executed the command below and found everything working fine.

PS C:\> .\minikube.exe start

Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Downloading Minikube ISO
 150.53 MB / 150.53 MB [============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Downloading kubeadm v1.10.0
Downloading kubelet v1.10.0
Finished Downloading kubelet v1.10.0
Finished Downloading kubeadm v1.10.0
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.

PS C:\> .\minikube.exe start

Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.

0
votes

I got this issue when using "Bash on Windows" with Azure Kubernetes:

az aks get-credentials -n <myCluster> -g <myResourceGroup>

The config file is auto-generated and placed in the '~/.kube/config' file as per the OS (which is Windows in my case).

To solve this, run from the Bash command line: cp <yourWindowsPathToConfigPrintedFromAboveCommand> ~/.kube/config

0
votes

The correct answer, out of all of the above, is to run the commands below:

sudo cp /etc/kubernetes/admin.conf $HOME/

sudo chown $(id -u):$(id -g) $HOME/admin.conf

export KUBECONFIG=$HOME/admin.conf

0
votes

In case someone, like myself, comes across this thread because of the underlying error in their Cloud Build step while switching from gcr.io/cloud-builders/kubectl to gcr.io/google.com/cloudsdktool/cloud-sdk: you need to explicitly call get-credentials for kubectl to work. My pipeline:

steps:
  - name: gcr.io/google.com/cloudsdktool/cloud-sdk
    entrypoint: 'sh'
    args:
      - '-c'
      - |
        gcloud container clusters get-credentials --zone "$$CLOUDSDK_COMPUTE_ZONE" "$$CLOUDSDK_CONTAINER_CLUSTER"
        kubectl call-what-you-need-here
options:
  env:
    - 'CLOUDSDK_COMPUTE_ZONE=europe-west3-a'
    - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'

-3
votes

The solution is this:

minikube delete
minikube start --vm-driver none