3 votes

Newbie setup:

  1. Created my first project in GCP.
  2. Created a cluster with the defaults, 3 nodes. Node version 1.7.6, cluster master version 1.7.6-gke.1.
  3. Deployed an application in a pod, per the example.
  4. Able to access "hello world" and the hostname using the external IP and the port.
  5. In the GCP/GKE page of my cloud console, I clicked "Discovery and load balancing" and could see the "kubernetes-dashboard" process with a green tick, but I cannot access it through the IP listed. Tried ports 8001 and 9090, and /ui, and nothing worked.
  6. Not using Cloud Shell or any gcloud commands on my local laptop; everything is done in the console.

Questions:

  1. How can anyone access the kubernetes-dashboard of a cluster created in the console?
  2. The docs are unclear: are the dashboard components incorporated into the console itself? Are the docs out of sync with the GCP/GKE screens?
  3. The tutorial says to run "kubectl proxy" and then open
    "http://localhost:8001/ui", but it doesn't work. Why?
Please try to ask single questions and keep them specific. – Maciej Jureczko

Sure, will do. My apologies. – MaMuDragon

5 Answers

0 votes
  1. The address of the dashboard service is only accessible from inside of the cluster. If you ssh into a node in your cluster, you should be able to connect to the dashboard. You can verify this by noticing that the address is within the services CIDR range for your cluster.

  2. The dashboard is running as a pod inside of your cluster with an associated service. If you open the Workloads view you will see the kubernetes-dashboard deployment and the pod that it created. I'm not sure which docs you are referring to, since you didn't provide a link.

  3. When you run kubectl proxy it creates a secure connection from your local machine into your cluster. It works by connecting to your master and then running through a proxy on the master to the pod/service/host that you are connecting to, via an SSH tunnel. It's possible that it isn't working because the SSH tunnels are not running; you should verify that your project has the SSH firewall rules that allow access from the cluster endpoint IP address. Otherwise, if you could explain more about how it fails, that would be useful for debugging.
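To make the CIDR check from point 1 concrete, here is a minimal shell sketch (the 10.3.240.x addresses are invented examples; substitute the dashboard service's ClusterIP and the services range your cluster actually reports):

```shell
# Rough IPv4-in-CIDR test; the addresses below are placeholders,
# not values from a real cluster.
in_cidr() {
  ip=$1 cidr=$2
  net=${cidr%/*} bits=${cidr#*/}
  # Convert dotted-quad addresses to 32-bit integers.
  ip_n=$(echo "$ip"   | awk -F. '{print ($1*16777216)+($2*65536)+($3*256)+$4}')
  net_n=$(echo "$net" | awk -F. '{print ($1*16777216)+($2*65536)+($3*256)+$4}')
  mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( ip_n & mask )) -eq $(( net_n & mask )) ]
}

# e.g. dashboard ClusterIP 10.3.240.10 against services range 10.3.240.0/20:
in_cidr 10.3.240.10 10.3.240.0/20 && echo "inside services range"
# prints: inside services range
```

If the service address falls inside that range, it is only reachable from within the cluster, which is why hitting it from a browser fails.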

2 votes

If you create a cluster with version 1.9.x or greater, you can access the dashboard using tokens.

  1. Get the secret:

kubectl -n kube-system describe secrets `kubectl -n kube-system get secrets | awk '/clusterrole-aggregation-controller/ {print $1}'` | awk '/token:/ {print $2}'

  2. Copy the secret.

  3. Run kubectl proxy.

  4. Open the UI at 127.0.0.1:8001/ui. This redirects to the login page, which offers two options: kubeconfig and token. Select token and paste the secret copied earlier.
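For illustration, here is the same `awk '/token:/ {print $2}'` filter from step 1 run on a made-up fragment of `kubectl describe secrets` output (the token value is a placeholder, not a real credential):

```shell
# Fake describe-secrets output; only the line starting with "token:"
# matches the awk pattern, and $2 is the value after the label.
sample='Name:   clusterrole-aggregation-controller-token-abcde
Type:   kubernetes.io/service-account-token
token:  eyJhbGciOiJSUzI1NiJ9.placeholder'

printf '%s\n' "$sample" | awk '/token:/ {print $2}'
# prints: eyJhbGciOiJSUzI1NiJ9.placeholder
```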

Hope this helps.

1 vote

It seems to be an issue with the internal Kubernetes DNS service starting at version 1.7.6 on Google Cloud.

The solution is to access the dashboard at this endpoint instead:

http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard

Github Issue links:

https://github.com/kubernetes/dashboard/issues/2368
https://github.com/kubernetes/kubernetes/issues/52729

0 votes

First, get credentials for the cluster:

    gcloud container clusters get-credentials cluster-1 --zone my-zone --project my-project

Then find your kubernetes dashboard endpoint:

    kubectl cluster-info

It will be something like https://42.42.42.42/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy
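As a sketch, the dashboard line can be pulled out of the `kubectl cluster-info` output with awk (the sample below only mirrors the shape of that output; 42.42.42.42 is the placeholder IP from above):

```shell
# Simulated `kubectl cluster-info` output (placeholder addresses).
sample='Kubernetes master is running at https://42.42.42.42
kubernetes-dashboard is running at https://42.42.42.42/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy'

# The URL is the last field of the line that mentions the dashboard.
printf '%s\n' "$sample" | awk '/kubernetes-dashboard/ {print $NF}'
# prints the dashboard proxy URL
```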

0 votes
  1. Install kube-dashboard:

    kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
    
  2. Run:

    $ kubectl proxy
    
  3. Access:

    http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login