36
votes

I am working on Azure Kubernetes Service (AKS), where we can store Docker images in Azure. When I try to check my kubectl version, I get:

Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.

For this I followed MSDN: Building Microservices with AKS and VSTS – Part 2 and MS Docs: Kubernetes on Windows.

So, can you please suggest how to resolve this issue?

13
I am getting the same issue on kubectl cluster-info. – Parth Trivedi

13 Answers

31
votes

I think you might have missed configuring the cluster; for that you need to run the command below in your command prompt:

az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

The above CLI command writes the cluster and node connection details into the kubeconfig file on your local machine.

After that, run the kubectl get nodes command in your command prompt; you should then get the list of nodes inside the cluster.

For reference, follow Deploy an Azure Kubernetes Service (AKS) cluster.
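A rough end-to-end sketch of that flow, run in PowerShell or a bash shell (the resource group and cluster names are the same placeholders as in the command above, so substitute your own):

# sign in to Azure first if you have not already
az login

# merge the AKS cluster credentials into your local kubeconfig
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

# confirm kubectl now points at the AKS cluster, then list the nodes
kubectl config current-context
kubectl get nodes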

13
votes

If your config file is correctly set up (check $HOME/.kube/config on Linux or %UserProfile%\.kube\config on Windows) but you are still receiving the error message, try running the command line as an administrator.
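To confirm what kubectl is actually reading, a quick sanity check with standard kubectl commands (in PowerShell or bash) is:

# print the merged config kubectl sees (clusters, contexts, users)
kubectl config view

# print which context is currently selected
kubectl config current-context

If kubectl config view comes back essentially empty, the problem is the config file location rather than permissions.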

More information on the config file can be found here: https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/

5
votes

For me it appeared to be due to Windows not having a HOME environment variable set. According to the docs, kubectl uses the config file $(HOME)/.kube/config, but since this variable isn't set on Windows it can't locate the file.

I created a HOME variable with the same value as USERPROFILE and it started working.
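For the record, that fix can be done from a regular command prompt; open a new terminal afterwards so the change is picked up, since setx only affects sessions started after it runs:

REM create a HOME variable with the same value as USERPROFILE
setx HOME "%USERPROFILE%"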

4
votes

I'm using Hyper-V on local Windows and I hit this error because I hadn't configured minikube.

(I know the question is about Azure, not minikube, but this page comes up at the top for the error message, so I've put the solution here.)

1. Enable Hyper-V

Type systeminfo in your terminal. If you can find the line below,

Hyper-V Requirements:     A hypervisor has been detected. Features required for Hyper-V will not be displayed.

Hyper-V works correctly.

If you can't, enable it from settings.

2. Create a Hyper-V network switch

Open Hyper-V Manager (searching for it is the fastest way).

Next, click your PC name on the left.

Then you can find the Virtual Switch Manager menu on the right.

Click it and create an External virtual switch named "Minikube Switch".

Click Apply to create it.

3. Start minikube

Go back to the terminal and type:

minikube start --vm-driver hyperv --hyperv-virtual-switch "Minikube Switch"
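Once that command finishes, a quick check that the cluster is reachable (standard minikube/kubectl commands, in PowerShell or bash) might look like:

# confirm the host, kubelet and apiserver are all reported as Running
minikube status

# minikube normally selects its own context, but switching explicitly does no harm
kubectl config use-context minikube
kubectl get nodes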

For more information, check the steps in this article.

4
votes

In my case, I was switching between an Azure AKS cluster and the local docker-desktop cluster.

So every time I changed the cluster context I needed to restart Docker, otherwise I got the same error as described:

Unable to connect to the server: dial tcp 127.0.0.1:6443: connectex: No connection could be made because the target machine actively refused it.


PS: make sure your cluster is actually started (the Docker Desktop UI shows a "Stop local cluster" option when it is running).
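Restarting Docker works, but switching the kubeconfig context explicitly may also be worth trying; a sketch, where docker-desktop and myAKSCluster are typical context names and may differ on your machine:

# list every context known to your kubeconfig
kubectl config get-contexts

# point kubectl at the local Docker Desktop cluster ...
kubectl config use-context docker-desktop

# ... or back at the AKS cluster
kubectl config use-context myAKSCluster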

4
votes

Check that Docker is running and that you have started minikube (or whichever local/cloud Kubernetes you are using). My issue was resolved after running "minikube start --driver=docker".
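A minimal version of that check, assuming minikube with the docker driver as in the command above (PowerShell or bash):

# errors out if the Docker daemon is not running
docker info

# start (or restart) the local cluster on the docker driver, then verify it
minikube start --driver=docker
minikube status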

3
votes

I was facing the same error while running the command "kubectl get pods".

The issue was resolved with the following steps:

a) First, find out the current context

kubectl config get-contexts
CURRENT   NAME      CLUSTER   AUTHINFO   NAMESPACE

b) If no context is set, select one manually using

kubectl config use-context <your-context>

Hope this helps.

0
votes

I had exactly the same problem even after having a correct config (created by running the Azure CLI command).

It seems that kubectl expects the HOME environment variable to be set, but it did not exist for me. There is, however, a solution:

If you add a KUBECONFIG environment variable that points to the config file, it will start working.

Example:

setx KUBECONFIG %UserProfile%\.kube\config

When the variable is present, kubectl has no trouble reading the file.

P.S. It is an alternative to setting a HOME variable as suggested in another answer.
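One caveat: setx only writes the variable for sessions started afterwards, so either open a fresh terminal or also set it for the one you already have. At a plain command prompt (same path as above) that looks like:

REM persist for future sessions
setx KUBECONFIG %UserProfile%\.kube\config

REM make it visible to the current session as well
set KUBECONFIG=%UserProfile%\.kube\config

REM kubectl should now pick up the config
kubectl config current-context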

0
votes

I encountered a similar problem:

> kubectl cluster-info
"To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Unable to connect to the server: dial tcp xxx.x.x.x:8080: connectex: No connection could be made because the target machine actively refused it."

> kubectl cluster-info dump
Unable to connect to the server: dial tcp xxx.0.0.x:8080: connectex: No connection could be made because the target machine actively refused it.

This setup was working fine until Docker for Desktop brought in its own copy of kubectl. There are two ways to overcome this situation:

1 - Quit / Stop Docker for Desktop while using the cluster

2 - Set KUBECONFIG file path

I tried both options and they worked.
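If you suspect the Docker for Desktop copy of kubectl is the one being picked up, listing every kubectl on the PATH makes it obvious; at a command prompt:

REM entries appear in the order Windows searches them; the first one wins
where kubectl

(In PowerShell, Get-Command kubectl -All gives the same information.)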

I found a good sample .kube/config, adding it here for quick reference:

apiVersion: v1
clusters:
- cluster:
    certificate-authority: fake-ca-file
    server: https://1.2.3.4
  name: development
- cluster:
    insecure-skip-tls-verify: true
    server: https://5.6.7.8
  name: scratch
contexts:
- context:
    cluster: development
    namespace: frontend
    user: developer
  name: dev-frontend
- context:
    cluster: development
    namespace: storage
    user: developer
  name: dev-storage
- context:
    cluster: scratch
    namespace: default
    user: experimenter
  name: exp-scratch
current-context: ""
kind: Config
preferences: {}
users:
- name: developer
  user:
    client-certificate: fake-cert-file
    client-key: fake-key-file
- name: experimenter
  user:
    password: some-password
    username: exp

Reference: https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/
0
votes

Following @ilya-chernomordik, I added my config path to the system variables by running

setx KUBECONFIG "D:\Minikube\Minikube.minikube\config"

I had changed the default location from the C: drive to the D: drive as I have less space on C:.

Now the problem is fixed.

Edit: after 5 minutes, the API server stopped again. I have been trying to solve this issue for more than 5-6 hours. I'm not sure why this problem is happening, even after adding the correct path.

0
votes

The Azure self-hosted agent doesn't have permission to access the Kubernetes cluster:

Remove the Azure self-hosted agent: .\config.cmd remove

Configure it again (.\config.cmd) with a user that has permission to access the Kubernetes cluster.
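A sketch of that re-registration, assuming the agent was extracted to C:\agent (adjust the folder to wherever yours lives, and answer the prompts with an account that can reach the cluster):

REM from the agent installation folder
cd C:\agent

REM unregister the existing agent
.\config.cmd remove

REM register it again under an account with access to the Kubernetes cluster
.\config.cmd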
0
votes

Essentially, this problem occurs if your minikube or kind cluster isn't configured. Just try restarting minikube or kind. If that doesn't solve the problem, try restarting the hypervisor that minikube uses.

minikube start

This command solved my issue.
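If a plain restart is not enough, recreating the local cluster is a heavier but often effective fallback; note this wipes the minikube VM and its state, so only do it on a disposable local cluster:

# stop and delete the existing minikube machine, then create a fresh one
minikube stop
minikube delete
minikube start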

0
votes

I am on Windows 10, and in my case I had not enabled Kubernetes in Docker Desktop.

In that state, there are no contexts available.


So go to the Docker Desktop settings and enable Kubernetes there.


Now run the following command.

kubectl config get-contexts

Ensure you now see a docker-desktop context listed.


You can also try listing the nodes:

kubectl get nodes
