11
votes

We set up Kubernetes 1.10.1 on CoreOS with three nodes. The setup was successful:

NAME                STATUS    ROLES     AGE       VERSION
node1.example.com   Ready     master    19h       v1.10.1+coreos.0
node2.example.com   Ready     node      19h       v1.10.1+coreos.0
node3.example.com   Ready     node      19h       v1.10.1+coreos.0

NAMESPACE     NAME                                        READY     STATUS    RESTARTS   AGE
default       pod-nginx2-689b9cdffb-qrpjn                 1/1       Running   0          16h
kube-system   calico-kube-controllers-568dfff588-zxqjj    1/1       Running   0          18h
kube-system   calico-node-2wwcg                           2/2       Running   0          18h
kube-system   calico-node-78nzn                           2/2       Running   0          18h
kube-system   calico-node-gbvkn                           2/2       Running   0          18h
kube-system   calico-policy-controller-6d568cc5f7-fx6bv   1/1       Running   0          18h
kube-system   kube-apiserver-x66dh                        1/1       Running   4          18h
kube-system   kube-controller-manager-787f887b67-q6gts    1/1       Running   0          18h
kube-system   kube-dns-79ccb5d8df-b9skr                   3/3       Running   0          18h
kube-system   kube-proxy-gb2wj                            1/1       Running   0          18h
kube-system   kube-proxy-qtxgv                            1/1       Running   0          18h
kube-system   kube-proxy-v7wnf                            1/1       Running   0          18h
kube-system   kube-scheduler-68d5b648c-54925              1/1       Running   0          18h
kube-system   pod-checkpointer-vpvg5                      1/1       Running   0          18h

But when I try to see the logs of any pod, kubectl gives the following error:

kubectl logs -f pod-nginx2-689b9cdffb-qrpjn
error: You must be logged in to the server (the server has asked for the client to provide credentials ( pods/log pod-nginx2-689b9cdffb-qrpjn))

Trying to get inside the pods (using kubectl exec) also gives the following error:

kubectl exec -ti pod-nginx2-689b9cdffb-qrpjn bash
error: unable to upgrade connection: Unauthorized

Kubelet service file:

[Unit]
Description=Kubelet via Hyperkube ACI
[Service]
EnvironmentFile=/etc/kubernetes/kubelet.env
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/run/kubelet-pod.uuid \
  --volume=resolv,kind=host,source=/etc/resolv.conf \
  --mount volume=resolv,target=/etc/resolv.conf \
  --volume var-lib-cni,kind=host,source=/var/lib/cni \
  --mount volume=var-lib-cni,target=/var/lib/cni \
  --volume var-log,kind=host,source=/var/log \
  --mount volume=var-log,target=/var/log"
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/checkpoint-secrets
ExecStartPre=/bin/mkdir -p /etc/kubernetes/inactive-manifests
ExecStartPre=/bin/mkdir -p /var/lib/cni
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid
ExecStart=/usr/lib/coreos/kubelet-wrapper \
  --kubeconfig=/etc/kubernetes/kubeconfig \
  --config=/etc/kubernetes/config \
  --cni-conf-dir=/etc/kubernetes/cni/net.d \
  --network-plugin=cni \
  --allow-privileged \
  --lock-file=/var/run/lock/kubelet.lock \
  --exit-on-lock-contention \
  --hostname-override=node1.example.com \
  --node-labels=node-role.kubernetes.io/master \
  --register-with-taints=node-role.kubernetes.io/master=:NoSchedule
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target

KubeletConfiguration file:

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
staticPodPath: "/etc/kubernetes/manifests"
clusterDomain: "cluster.local"
clusterDNS: [ "10.3.0.10" ]
nodeStatusUpdateFrequency: "5s"
clientCAFile: "/etc/kubernetes/ca.crt"

We have also specified the --kubelet-client-certificate and --kubelet-client-key flags in the kube-apiserver.yaml file:

- --kubelet-client-certificate=/etc/kubernetes/secrets/apiserver.crt
- --kubelet-client-key=/etc/kubernetes/secrets/apiserver.key

So what are we missing here? Thanks in advance :)

5
How have you set up your kubectl? What does your kubeconfig file look like? Are you using the correct client/server certificates? What does kubectl version give you? – ffledgling

5 Answers

2
votes

Looks like you misconfigured the kubelet:

You missed the --client-ca-file flag in your kubelet service file.

That's why you can get some general information from the master, but can't get access to the nodes.

This flag tells the kubelet which CA to use when verifying client certificates on its own API; without it, requests that the API server sends to the kubelet (such as logs and exec) are rejected as unauthorized.
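
A sketch of the change, based on the unit file in the question (it assumes the CA bundle is the /etc/kubernetes/ca.crt that your ExecStartPre step already writes out):

    # --client-ca-file lets the kubelet verify the API server's client certificate
    ExecStart=/usr/lib/coreos/kubelet-wrapper \
      --kubeconfig=/etc/kubernetes/kubeconfig \
      --config=/etc/kubernetes/config \
      --client-ca-file=/etc/kubernetes/ca.crt \
      --cni-conf-dir=/etc/kubernetes/cni/net.d \
      --network-plugin=cni \
      --allow-privileged \
      --lock-file=/var/run/lock/kubelet.lock \
      --exit-on-lock-contention \
      --hostname-override=node1.example.com \
      --node-labels=node-role.kubernetes.io/master \
      --register-with-taints=node-role.kubernetes.io/master=:NoSchedule

After editing, run systemctl daemon-reload and restart the kubelet on each node.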

1
votes

This is a quite common and general error related to authentication problems against the API server.

I believe many people will search for this title, so I'll provide a few directions with examples for different types of cases.

1) (General)
Common to all types of deployments: check whether the credentials have expired.
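
For example, a quick way to check when an embedded client certificate expires (a sketch; it assumes the first user entry in your kubeconfig carries client-certificate-data):

    # Decode the client certificate from the kubeconfig and print its expiry date
    kubectl config view --raw -o jsonpath='{.users[0].user.client-certificate-data}' \
      | base64 -d | openssl x509 -noout -enddate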

2) (Pods and service accounts)
The authentication issue is related to a pod that uses a service account with problems, such as an invalid token.
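
For example, to see which service account a pod runs as and inspect its token (a sketch; my-pod and default are placeholder names, and it assumes an older cluster where the service account references a token secret):

    # Which service account does the pod use?
    kubectl get pod my-pod -o jsonpath='{.spec.serviceAccountName}{"\n"}'
    # Inspect the service account and the token secret it references
    kubectl get serviceaccount default -o yaml
    kubectl describe secret $(kubectl get serviceaccount default -o jsonpath='{.secrets[0].name}')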

3) (IaC or deployment tools)
Running with an IaC tool like Terraform and failing to pass the certificate correctly, as in this case.

4) (Cloud or other SaaS providers)
A few cases which I encountered with AWS EKS:

4.A) If you're not the cluster creator, you might not have permission to access the cluster.

When an EKS cluster is created, the user (or role) that creates the cluster is automatically granted system:masters permissions in the cluster's RBAC configuration. Other users or roles that need to interact with the cluster have to be added explicitly - read more here.
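
For example, additional users are typically granted access via the aws-auth ConfigMap in kube-system (a sketch of its shape; the account ID and user name are placeholders):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: aws-auth
      namespace: kube-system
    data:
      mapUsers: |
        - userarn: arn:aws:iam::111122223333:user/alice
          username: alice
          groups:
            - system:masters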

4.B) If you're working with multiple clusters/environments/accounts via the CLI, the current profile may need to be re-authenticated, or there may be a mismatch between the cluster you're trying to access and the values of shell variables such as AWS_DEFAULT_PROFILE or AWS_DEFAULT_REGION.
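
For example (a sketch; the cluster, region, and profile names are placeholders):

    # Confirm which identity and region the CLI is actually using
    aws sts get-caller-identity
    echo "$AWS_DEFAULT_PROFILE / $AWS_DEFAULT_REGION"

    # Regenerate the kubeconfig entry for the intended cluster
    aws eks update-kubeconfig --name my-cluster --region us-east-1 --profile my-profile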

4.C) New credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) were exported into a terminal that still holds an AWS_SESSION_TOKEN from a previous session; the stale token needs to be replaced or unset.
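
For example (a sketch; the key values are placeholders):

    # Drop the stale session token before exporting fresh credentials
    unset AWS_SESSION_TOKEN
    export AWS_ACCESS_KEY_ID=<new-access-key-id>
    export AWS_SECRET_ACCESS_KEY=<new-secret-access-key>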


0
votes

In general, many different .kube/config file errors will trigger this error message. In my case, I had simply specified the wrong cluster name in my config file (and spent MANY hours trying to debug it).

When I specified the wrong cluster name, I received two prompts for an MFA token code, followed by the error: You must be logged in to the server (the server has asked for the client to provide credentials).

Example:

# kubectl create -f scripts/aws-auth-cm.yaml
Assume Role MFA token code: 123456
Assume Role MFA token code: 123456
could not get token: AccessDenied: MultiFactorAuthentication failed with invalid MFA one time pass code.
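
To catch this kind of mismatch earlier, you can cross-check the cluster names your contexts reference against the clusters actually defined in the file (a quick sketch):

    # Contexts and the cluster each one points at
    kubectl config get-contexts
    # Cluster names actually defined in the kubeconfig
    kubectl config view -o jsonpath='{range .clusters[*]}{.name}{"\n"}{end}'
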
0
votes

For me, the issue was a misconfiguration in the ~/.kube/config file; after restoring the configuration using kubectl config view --raw > ~/.kube/config, it was resolved.
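
One caveat (this assumes KUBECONFIG points at the source file or files you want to merge): if the default ~/.kube/config is itself the file being read, the shell truncates it before kubectl runs, so redirecting straight back into it can wipe the config. Writing to a temporary file first is safer:

    # Flatten whatever KUBECONFIG points at, then move it into place
    kubectl config view --raw > /tmp/kubeconfig.merged
    mv /tmp/kubeconfig.merged ~/.kube/config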

-1
votes

In my case, I experienced multiple errors while running different kubectl commands, such as unauthorized and the server has asked the client to provide credentials. After spending a few hours, I deduced that the sync with my cluster in the cloud had somehow gotten messed up. So I ran the following commands to refresh the configuration, and it started working again:

  1. Unset users:

    kubectl config unset users.<full-user-name-as-found-in: kubectl config view>

  2. Remove cluster:

    kubectl config delete-cluster <full-cluster-name-as-found-in: kubectl config view>

  3. Remove context:

    kubectl config delete-context <full-context-name-as-found-in: kubectl config view>

  4. Unset the current context:

    kubectl config unset current-context

  5. Get fresh cluster config from cloud:

    ibmcloud cs cluster config --cluster <cluster-name>

Note: I am using ibmcloud for my cluster, so the last command could be different in your case.