3 votes

I've installed Kubernetes on Ubuntu 18.04 using this article. Everything was working fine, and then I tried to install the Kubernetes dashboard with these instructions.

Now when I run kubectl proxy, the dashboard does not come up, and the browser shows the following error message when I try to access it using the default kubernetes-dashboard URL.

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "no endpoints available for service \"https:kubernetes-dashboard:\"",
  "reason": "ServiceUnavailable",
  "code": 503
}
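For context, a 503 ServiceUnavailable from the proxy usually means the Service's selector matches no ready pods. One way to confirm that (a sketch, assuming a standard dashboard install in the kubernetes-dashboard namespace) is to look at the Service's endpoints directly:

```shell
# An empty ENDPOINTS column confirms no ready pod is backing the Service,
# which is exactly what produces the 503 above.
kubectl get endpoints kubernetes-dashboard -n kubernetes-dashboard
```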

The following commands give this output, where kubernetes-dashboard shows a status of CrashLoopBackOff:

$> kubectl get pods --all-namespaces

NAMESPACE              NAME                                         READY   STATUS             RESTARTS   AGE
default                amazing-app-rs-59jt9                         1/1     Running            5          23d
default                amazing-app-rs-k6fg5                         1/1     Running            5          23d
default                amazing-app-rs-qd767                         1/1     Running            5          23d
default                amazingapp-one-deployment-57dddd6fb7-xdxlp   1/1     Running            5          23d
default                nginx-86c57db685-vwfzf                       1/1     Running            4          22d
kube-system            coredns-6955765f44-nqphx                     0/1     Running            14         25d
kube-system            coredns-6955765f44-psdv4                     0/1     Running            14         25d
kube-system            etcd-master-node                             1/1     Running            8          25d
kube-system            kube-apiserver-master-node                   1/1     Running            42         25d
kube-system            kube-controller-manager-master-node          1/1     Running            11         25d
kube-system            kube-flannel-ds-amd64-95lvl                  1/1     Running            8          25d
kube-system            kube-proxy-qcpqm                             1/1     Running            8          25d
kube-system            kube-scheduler-master-node                   1/1     Running            11         25d
kubernetes-dashboard   dashboard-metrics-scraper-7b64584c5c-kvz5d   1/1     Running            0          41m
kubernetes-dashboard   kubernetes-dashboard-566f567dc7-w2sbk        0/1     CrashLoopBackOff   12         41m

$> kubectl get services --all-namespaces

NAMESPACE              NAME                        TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default                kubernetes                  ClusterIP   ----------   <none>        443/TCP                  25d
default                nginx                       NodePort    ----------   <none>        80:32188/TCP             22d
kube-system            kube-dns                    ClusterIP   ----------   <none>        53/UDP,53/TCP,9153/TCP   25d
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   ----------   <none>        8000/TCP                 24d
kubernetes-dashboard   kubernetes-dashboard        ClusterIP   ----------   <none>        443/TCP                  24d


$ kubectl get events -n kubernetes-dashboard

LAST SEEN   TYPE      REASON    OBJECT                                      MESSAGE
24m         Normal    Pulling   pod/kubernetes-dashboard-566f567dc7-w2sbk   Pulling image "kubernetesui/dashboard:v2.0.0-rc2"
4m46s       Warning   BackOff   pod/kubernetes-dashboard-566f567dc7-w2sbk   Back-off restarting failed container

$ kubectl describe services kubernetes-dashboard -n kubernetes-dashboard

Name:              kubernetes-dashboard
Namespace:         kubernetes-dashboard
Labels:            k8s-app=kubernetes-dashboard
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                     {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"k8s-app":"kubernetes-dashboard"},"name":"kubernetes-dashboard"...
Selector:          k8s-app=kubernetes-dashboard
Type:              ClusterIP
IP:                10.96.241.62
Port:              <unset>  443/TCP
TargetPort:        8443/TCP
Endpoints:         
Session Affinity:  None
Events:            <none>

$ kubectl logs kubernetes-dashboard-566f567dc7-w2sbk -n kubernetes-dashboard

2020/01/29 16:00:34 Starting overwatch
2020/01/29 16:00:34 Using namespace: kubernetes-dashboard
2020/01/29 16:00:34 Using in-cluster config to connect to apiserver
2020/01/29 16:00:34 Using secret token for csrf signing
2020/01/29 16:00:34 Initializing csrf token from kubernetes-dashboard-csrf secret
panic: Get https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf: dial tcp 10.96.0.1:443: i/o timeout

goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc0003dac80)
        /home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:40 +0x3b4
github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
        /home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:65
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc000534200)
        /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:494 +0xc7
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc000534200)
        /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:462 +0x47
github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
        /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:543
main.main()
        /home/travis/build/kubernetes/dashboard/src/app/backend/dashboard.go:105 +0x212

Any suggestions to fix this? Thanks in advance.

Can you post the logs from the pod which is in CrashLoopBackOff? – Arghya Sadhu
@ArghyaSadhu It's the same with kubectl logs kubernetes-dashboard-566f567dc7-w2sbk: Error from server (NotFound): pods "kubernetes-dashboard-566f567dc7-w2sbk" not found – Kundan
What do kubectl get events -n kubernetes-dashboard and kubectl describe services kubernetes-dashboard -n kubernetes-dashboard say? – Arghya Sadhu
kubectl get events - No resources found in default namespace. – Kundan
@CodeRunner Add the namespace to your kubectl command: kubectl logs kubernetes-dashboard-566f567dc7-w2sbk -n kubernetes-dashboard – BinaryMonster

1 Answer

1 vote

I noticed that the guide you used to install the Kubernetes cluster is missing one important part.

According to the Kubernetes documentation:

For flannel to work correctly, you must pass --pod-network-cidr=10.244.0.0/16 to kubeadm init.

Set /proc/sys/net/bridge/bridge-nf-call-iptables to 1 by running sysctl net.bridge.bridge-nf-call-iptables=1 to pass bridged IPv4 traffic to iptables’ chains. This is a requirement for some CNI plugins to work, for more information please see here.

Make sure that your firewall rules allow UDP ports 8285 and 8472 traffic for all hosts participating in the overlay network. See here.

Note that flannel works on amd64, arm, arm64, ppc64le and s390x under Linux. Windows (amd64) is claimed as supported in v0.11.0 but the usage is undocumented.
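If the cluster was originally created without that flag, the fix generally means re-initializing with it. As a sketch only (this wipes and recreates the control plane, so it is a last resort), the kubeadm invocation the flannel docs assume looks like:

```shell
# Re-initialize the control plane with the pod CIDR flannel expects.
# Only do this if you are prepared to rebuild the cluster state.
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
```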

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml

For more information about flannel, see the CoreOS flannel repository on GitHub .

To fix this, I suggest running the following command:

sysctl net.bridge.bridge-nf-call-iptables=1
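Note that a sysctl set this way does not survive a reboot. To make it persistent, one option (a sketch; the file name 99-kubernetes.conf is just an example) is to drop a file under /etc/sysctl.d:

```shell
# Persist the bridge-nf-call-iptables setting across reboots
echo "net.bridge.bridge-nf-call-iptables = 1" | sudo tee /etc/sysctl.d/99-kubernetes.conf
# Reload all sysctl configuration files so the setting takes effect now
sudo sysctl --system
```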

And then reinstall flannel:

kubectl delete -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
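After reapplying the manifest, you can verify the rollout before retrying the dashboard. A sketch, assuming the upstream manifest's app=flannel label is in place:

```shell
# Watch the flannel DaemonSet pods come back to Running
kubectl get pods -n kube-system -l app=flannel -w
# Then check whether the dashboard pod has recovered from CrashLoopBackOff
kubectl get pods -n kubernetes-dashboard
```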

Update: I verified that the /proc/sys/net/bridge/bridge-nf-call-iptables value is 1 by default on Ubuntu 18.04 LTS. So the issue here is that you need to access the dashboard locally.

If you are connected to your master node via SSH, it is possible to use the -X flag with ssh in order to launch a web browser via X11 forwarding (ForwardX11). Fortunately, Ubuntu 18.04 LTS has it turned on by default.

ssh -X server

Then install a local web browser like Chromium:

sudo apt-get install chromium-browser
chromium-browser

And finally, access the dashboard locally from the node:

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
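If X11 forwarding is not an option, an alternative sketch is plain SSH local port forwarding, so that kubectl proxy running on the master can be reached from the browser on your own machine (assuming the proxy listens on its default port 8001):

```shell
# Forward local port 8001 to port 8001 on the master node
ssh -L 8001:localhost:8001 server
# On the master, run: kubectl proxy
# Then open the dashboard URL above in your local browser
```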

Hope it helps.