0
votes

I have deployed 1 master and 3 worker nodes on VMs. I can successfully run "kubectl" commands over SSH on the server, and I can deploy pods, all fine.

But I couldn't find out how to run "kubectl" commands from my local machine to manage the K8s cluster. How can I do that?

Thanks!

2
Can you share the error you're getting? By deploying pods, do you mean running "kubectl create"? If so, you should be able to get pods too. - m8usz
This is not an "error" issue; I can't access the K8s cluster deployed in our DC from my local laptop. I mean I want to run kubectl commands from my laptop. - yatta
Yes, so when you run "kubectl get ns", what do you get? If you can deploy pods, presumably from your local machine, it means you're already connected. - m8usz
Run "kubectl config get-contexts", which will list all the clusters in your config file. Then run "kubectl config use-context cluster-name"; this points kubectl at that cluster. Replace 'cluster-name' with whatever the name is for your cluster. If it's not listed, it means you don't have a context for it in your config file. - m8usz
No problem. On Windows, run type %USERPROFILE%\.kube\config - this will print your config file. - m8usz

2 Answers

3
votes

You probably already have a kubeconfig file on the VMs (with kubeadm it is typically /etc/kubernetes/admin.conf on the master, often also copied to ~/.kube/config). Copy it to your local machine as $HOME/.kube/config, so kubectl knows how to access the cluster.

For more information, see the Kubernetes documentation on organizing cluster access with kubeconfig files.
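A minimal sketch of that copy step, assuming SSH access to the master; the address user@master-vm and the helper name fetch_kubeconfig are placeholders invented here:

```shell
# Sketch: fetch the cluster's kubeconfig from the master VM so the
# local kubectl can use it. The remote address is a placeholder.
fetch_kubeconfig() {
  # $1 = source (a remote spec like user@master-vm:~/.kube/config,
  #      or a local path); $2 = local destination
  mkdir -p "$(dirname "$2")"   # kubectl reads $HOME/.kube/config by default
  scp "$1" "$2"
}

# Usage against the master VM (hypothetical address):
# fetch_kubeconfig user@master-vm:~/.kube/config "$HOME/.kube/config"
# kubectl get nodes   # should now talk to the cluster on the VMs
```

If the file embeds the API server's private DC address, your laptop must be able to reach that address (VPN or routing), or you will get connection timeouts even with a correct config.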

1
votes

From your local machine run:

kubectl config get-contexts

Then run the following (replace cluster-name with the name of the cluster you want to communicate with):

kubectl config use-context cluster-name

If the cluster you want to communicate with is not listed, your config file doesn't contain a context for it; you need to add (or merge in) that cluster's kubeconfig first.
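When kubectl isn't installed yet, a rough way to see which context names a config file declares is to scan it directly. This is a sketch that relies on the flat YAML layout kubectl writes; it is not a full YAML parser, and the function name list_contexts is invented here:

```shell
# Sketch: list the context names declared in a kubeconfig file,
# without needing kubectl. Assumes the layout "kubectl config" writes.
list_contexts() {
  awk '
    /^contexts:/ { in_ctx = 1; next }   # entering the contexts: section
    /^[a-z]/     { in_ctx = 0 }         # any other top-level key ends it
    in_ctx && /name:/ { print $NF }     # print each context entry name
  ' "$1"
}

# Usage:
# list_contexts "$HOME/.kube/config"
```

Each name it prints is a valid argument to kubectl config use-context.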