
Unable to access the Kubernetes dashboard. I executed the steps below:

  1. kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml

  2. kubectl proxy --address="192.168.56.12" -p 8001 --accept-hosts='^*$'

  3. Now trying to access from url: http://192.168.56.12:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "no endpoints available for service \"https:kubernetes-dashboard:\"",
  "reason": "ServiceUnavailable",
  "code": 503
}
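That 503 means the Service has no ready pods behind it, so the apiserver proxy has nothing to forward to. A quick way to confirm (a diagnostic sketch, not one of the original steps) is to check the Service's endpoints and the pod state:

# The ENDPOINTS column should list a pod IP; here it will be empty
kubectl get endpoints kubernetes-dashboard -n kubernetes-dashboard

# Shows the dashboard pod and which node it is crash-looping on
kubectl get pods -n kubernetes-dashboard -o wide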

Output of a few commands that may be required:

[root@k8s-master ~]# kubectl logs kubernetes-dashboard-6bb65fcc49-7wz6q --namespace=kubernetes-dashboard

Error from server: Get https://192.168.56.14:10250/containerLogs/kubernetes-dashboard/kubernetes-dashboard-6bb65fcc49-7wz6q/kubernetes-dashboard: dial tcp 192.168.56.14:10250: connect: no route to host
[root@k8s-master ~]#
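That "no route to host" on port 10250 points at host-level networking, not at the dashboard itself: the master cannot reach the kubelet API on node1. A sketch of a likely fix, assuming the nodes run firewalld (typical for a CentOS/RHEL host like this one):

# On node1: allow the master to reach the kubelet API port
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --reload

# Back on the master, retry fetching the logs
kubectl logs kubernetes-dashboard-6bb65fcc49-7wz6q -n kubernetes-dashboard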

$ kubectl get pods -o wide --all-namespaces
NAMESPACE              NAME                                        READY   STATUS             RESTARTS   AGE   IP              NODE
kube-system            coredns-5c98db65d4-89c9p                    1/1     Running            0          76m   10.244.0.14     k8s-master
kube-system            coredns-5c98db65d4-ggqfj                    1/1     Running            0          76m   10.244.0.13     k8s-master
kube-system            etcd-k8s-master                             1/1     Running            0          75m   192.168.56.12   k8s-master
kube-system            kube-apiserver-k8s-master                   1/1     Running            0          75m   192.168.56.12   k8s-master
kube-system            kube-controller-manager-k8s-master          1/1     Running            1          75m   192.168.56.12   k8s-master
kube-system            kube-flannel-ds-amd64-74zrn                 1/1     Running            1          74m   192.168.56.14   node1     
kube-system            kube-flannel-ds-amd64-hgcp8                 1/1     Running            0          75m   192.168.56.12   k8s-master
kube-system            kube-proxy-2lczb                            1/1     Running            0          74m   192.168.56.14   node1     
kube-system            kube-proxy-8dxdm                            1/1     Running            0          76m   192.168.56.12   k8s-master
kube-system            kube-scheduler-k8s-master                   1/1     Running            1          75m   192.168.56.12   k8s-master
kubernetes-dashboard   dashboard-metrics-scraper-fb986f88d-d49sw   1/1     Running            0          71m   10.244.1.21     node1     
kubernetes-dashboard   kubernetes-dashboard-6bb65fcc49-7wz6q       0/1     CrashLoopBackOff   18         71m   10.244.1.20     node1     

=========================================

[root@k8s-master ~]# kubectl describe pod kubernetes-dashboard-6bb65fcc49-7wz6q -n kubernetes-dashboard
Name:           kubernetes-dashboard-6bb65fcc49-7wz6q
Namespace:      kubernetes-dashboard
Priority:       0
Node:           node1/192.168.56.14
Start Time:     Mon, 23 Sep 2019 12:56:18 +0530
Labels:         k8s-app=kubernetes-dashboard
                pod-template-hash=6bb65fcc49
Annotations:    <none>
Status:         Running
IP:             10.244.1.20
Controlled By:  ReplicaSet/kubernetes-dashboard-6bb65fcc49
Containers:
  kubernetes-dashboard:
    Container ID:  docker://2cbbbc9b95a43a5242abe13f8178dc589487abcfccaea06ff4be70781f4c3711
    Image:         kubernetesui/dashboard:v2.0.0-beta4
    Image ID:      docker-pullable://docker.io/kubernetesui/dashboard@sha256:a35498beec44376efcf8c4478eebceb57ec3ba39a6579222358a1ebe455ec49e
    Port:          8443/TCP
    Host Port:     0/TCP
    Args:
      --auto-generate-certificates
      --namespace=kubernetes-dashboard
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Mon, 23 Sep 2019 14:10:27 +0530
      Finished:     Mon, 23 Sep 2019 14:10:28 +0530
    Ready:          False
    Restart Count:  19
    Liveness:       http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /certs from kubernetes-dashboard-certs (rw)
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-q7j4z (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kubernetes-dashboard-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-certs
    Optional:    false
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  kubernetes-dashboard-token-q7j4z:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-token-q7j4z
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason   Age                        From            Message
  ----     ------   ----                       ----            -------
  Warning  BackOff  <invalid> (x354 over 63m)  kubelet, node1  Back-off restarting failed container
[root@k8s-master ~]#
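Once port 10250 is reachable, the output of the last crashed container can be pulled even while the pod sits in CrashLoopBackOff (a sketch using the pod name from the describe output above):

# --previous returns the logs of the last terminated container instance
kubectl logs kubernetes-dashboard-6bb65fcc49-7wz6q -n kubernetes-dashboard --previous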

As I can see, you have installed 3 flannels and weave-net on your Kubernetes cluster. First of all, I would recommend deleting all of them and creating one instead. Additionally, please provide the output of this command: kubectl describe pod kubernetes-dashboard-6bb65fcc49-zn2c2 -n kubernetes-dashboard – Jakub
@jt97 they are DaemonSets, which means there are 3 nodes. – Akin Ozer
@jt97 – pasted the output of kubectl describe pod kubernetes-dashboard and kubectl get pods -o wide --all-namespaces. – muku
@muku please provide one more thing: the logs. Use this command: kubectl logs kubernetes-dashboard-6bb65fcc49-7wz6q -n kubernetes-dashboard – Jakub
@jt97 it's the same error: Error from server: Get 192.168.56.14:10250/containerLogs/kubernetes-dashboard/…: dial tcp 192.168.56.14:10250: connect: no route to host – muku

1 Answer


After realizing that the chart stable/kubernetes-dashboard is outdated, I found that you need to apply this manifest:

kubectl apply -f \
   https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml

However, migrating from a Helm chart to hard-coded manifests is not acceptable. After some searching, I found that the related chart is now under a subfolder of the dashboard's Git repo. There is no more stable repo; use the following instead:

helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm install kubernetes-dashboard/kubernetes-dashboard --name my-release
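To verify the release came up, something like the following should work (a sketch; my-release is whatever release name you chose):

# List releases and find the dashboard pods the chart created
helm ls
kubectl get pods --all-namespaces | grep dashboard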

Good luck! This should fix your issues, since this chart accounts for all the dependencies.

By the way:

  • even the image repository is no longer k8s.gcr.io/kubernetes-dashboard-amd64; instead, it is now kubernetesui/dashboard on Docker Hub
  • There is a metrics-scraper sidecar, which is not defined in the stable chart.