
I need to generate my own SSL certificates for the Kubernetes cluster components (apiserver, apiserver-kubelet-client, apiserver-etcd-client, front-proxy-client, etc.). The reason is that the validity period for those certificates is set to 1 year by default, and for business reasons I need a validity of more than one year. When I generated my own set of certificates and initialized the cluster, everything worked perfectly: the PODs in the kube-system namespace started and communication with the apiserver worked. But I encountered that some commands like kubectl logs, kubectl port-forward, and kubectl exec stopped working and started throwing the following errors:

kubectl logs <kube-apiserver-pod> -n kube-system
error: You must be logged in to the server (the server has asked for the client to provide credentials ( pods/log <kube-apiserver-pod>))

or

kubectl exec -it <kube-apiserver-pod> -n kube-system sh
error: unable to upgrade connection: Unauthorized

However, a docker exec into the k8s_apiserver container works properly.

During my debugging I found out that only the self-generated apiserver-kubelet-client key/cert pair causes this cluster behaviour.

Below is the process I used to generate and use my own cert/key pair for apiserver-kubelet-client.

  1. I initialized the Kubernetes cluster so that it set up its own certificates in the /etc/kubernetes/pki folder, by running kubeadm init ...

  2. Made a backup of the /etc/kubernetes/pki folder into /tmp/pki_k8s.

  3. Opened apiserver-kubelet-client.crt with openssl to check all of the set extensions, CN, O, etc.

    openssl x509 -noout -text -in /tmp/pki_k8s/apiserver-kubelet-client.crt

  4. To ensure that the same extensions and CN/O parameters appear in the certificate generated by myself, I created a .conf file for the extensions and a .csr file for the CN and O:

    cd /tmp/pki_k8s/
    cat <<-EOF_api_kubelet_client-ext > apiserver_kubelet_client-ext.conf
    [ v3_ca ]
    keyUsage = critical, digitalSignature, keyEncipherment
    extendedKeyUsage = clientAuth
    EOF_api_kubelet_client-ext

    openssl req -new -key apiserver-kubelet-client.key -out apiserver-kubelet-client.csr -subj "/O=system:masters,CN=kube-apiserver-kubelet-client"

  5. Finally, I generated my own apiserver-kubelet-client.crt. For its generation I reused the existing apiserver-kubelet-client.key and the ca.crt/ca.key generated by the K8S initialization:

    openssl x509 -req -in apiserver-kubelet-client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -sha256 -out apiserver-kubelet-client.crt -extensions v3_ca -extfile apiserver_kubelet_client-ext.conf -days 3650

  6. Once I had generated my own apiserver-kubelet-client.crt, which overrides the one generated by the k8s initialization itself, I reset the Kubernetes cluster with kubeadm reset. This purged the /etc/kubernetes folder.

  7. Copied all certificates from /tmp/pki_k8s back into /etc/kubernetes/pki.

  8. Reinitialized the K8S cluster with kubeadm init ...
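For what it's worth, steps 4 and 5 can be rehearsed end to end against a throwaway CA before touching the real /etc/kubernetes/pki. The sketch below uses hypothetical files in a temp directory instead of the cluster's real key and CA; note that in -subj the field separator is / (so the CN is written as /CN=...), which is a detail of openssl rather than anything Kubernetes-specific.

```shell
set -e
work=$(mktemp -d)
cd "$work"

# Throwaway CA standing in for the cluster's ca.crt/ca.key
openssl genrsa -out ca.key 2048 2>/dev/null
openssl req -x509 -new -key ca.key -subj "/CN=test-kubernetes-ca" \
  -days 3650 -out ca.crt

# Step 4: extension config and CSR
cat > apiserver_kubelet_client-ext.conf <<'EOF'
[ v3_ca ]
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth
EOF
openssl genrsa -out apiserver-kubelet-client.key 2048 2>/dev/null
openssl req -new -key apiserver-kubelet-client.key \
  -out apiserver-kubelet-client.csr \
  -subj "/O=system:masters/CN=kube-apiserver-kubelet-client"

# Step 5: sign the CSR with the CA, applying the v3_ca extensions
openssl x509 -req -in apiserver-kubelet-client.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -sha256 -out apiserver-kubelet-client.crt \
  -extensions v3_ca -extfile apiserver_kubelet_client-ext.conf -days 3650

# Inspect the result: Extended Key Usage should show
# "TLS Web Client Authentication"
openssl x509 -noout -text -in apiserver-kubelet-client.crt
```

Checking the subject and Extended Key Usage of the freshly signed certificate this way is a cheap sanity test before copying anything into /etc/kubernetes/pki.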

During that run I saw that the K8S cluster used the already existing certificates stored in /etc/kubernetes/pki for the setup:

[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Using the existing ca certificate and key.
[certificates] Using the existing apiserver certificate and key.
[certificates] Using the existing apiserver-kubelet-client certificate and key.
[certificates] Using the existing sa key.
[certificates] Using the existing front-proxy-ca certificate and key.
[certificates] Using the existing front-proxy-client certificate and key.
[certificates] Using the existing etcd/ca certificate and key.
[certificates] Using the existing etcd/server certificate and key.
[certificates] Using the existing etcd/peer certificate and key.
[certificates] Using the existing etcd/healthcheck-client certificate and key.
[certificates] Using the existing apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled

After that, the K8S cluster is UP: I can list pods, describe them, create deployments, etc. However, I am not able to check logs or exec into pods, as described above.

 kubectl get pods -n kube-system
NAME                                           READY     STATUS    RESTARTS   AGE
coredns-78fcdf6894-kjkp9                       1/1       Running   0          2m
coredns-78fcdf6894-q88lx                       1/1       Running   0          2m
...

kubectl  logs <apiserver_pod> -n kube-system -v 7
I0818 08:51:12.435494   12811 loader.go:359] Config loaded from file /root/.kube/config
I0818 08:51:12.436355   12811 loader.go:359] Config loaded from file /root/.kube/config
I0818 08:51:12.438413   12811 loader.go:359] Config loaded from file /root/.kube/config
I0818 08:51:12.447751   12811 loader.go:359] Config loaded from file /root/.kube/config
I0818 08:51:12.448109   12811 round_trippers.go:383] GET https://<HOST_IP>:6443/api/v1/namespaces/kube-system/pods/<apiserver_pod>
I0818 08:51:12.448126   12811 round_trippers.go:390] Request Headers:
I0818 08:51:12.448135   12811 round_trippers.go:393]     Accept: application/json, */*
I0818 08:51:12.448144   12811 round_trippers.go:393]     User-Agent: kubectl/v1.11.0 (linux/amd64) kubernetes/91e7b4f
I0818 08:51:12.462931   12811 round_trippers.go:408] Response Status: 200 OK in 14 milliseconds
I0818 08:51:12.471316   12811 loader.go:359] Config loaded from file /root/.kube/config
I0818 08:51:12.471949   12811 round_trippers.go:383] GET https://<HOST_IP>:6443/api/v1/namespaces/kube-system/pods/<apiserver_pod>/log
I0818 08:51:12.471968   12811 round_trippers.go:390] Request Headers:
I0818 08:51:12.471977   12811 round_trippers.go:393]     Accept: application/json, */*
I0818 08:51:12.471985   12811 round_trippers.go:393]     User-Agent: kubectl/v1.11.0 (linux/amd64) kubernetes/91e7b4f
I0818 08:51:12.475827   12811 round_trippers.go:408] Response Status: 401 Unauthorized in 3 milliseconds
I0818 08:51:12.476288   12811 helpers.go:201] server response object: [{
  "metadata": {},
  "status": "Failure",
  "message": "the server has asked for the client to provide credentials ( pods/log <apiserver_pod>)",
  "reason": "Unauthorized",
  "details": {
    "name": "<apiserver_pod>",
    "kind": "pods/log"
  },
  "code": 401
}]
F0818 08:51:12.476325   12811 helpers.go:119] error: You must be logged in to the server (the server has asked for the client to provide credentials ( pods/log <apiserver_pod>))

See kubelet service file below:

[root@qa053 ~]# cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
Environment="CA_CLIENT_CERT=--client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELE=--rotate-certificates=true"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS $KUBELET_CERTIFICATE_ARGS $CA_CLIENT_CERT

Do you have any ideas? :) Thanks.

Best Regards

What's the output of kubectl --loglevel=9 logs <kube-apiserver-pod> -n kube-system? – Aleksandar
Hi @Aleksandar, --loglevel is an unknown flag for kubectl, but -v 7 works. I will edit the question above because the output is longer than 600 characters. – JaroVojtek
Please share your kubelet service file. – Akar
Hi Akar, see the kubelet service config file at the end of the post. – JaroVojtek
Just want to add one comment here. It looks like the apiserver is not able to talk to the kubelet service, as apiserver-kubelet-client.crt is used for that. I followed the documentation described here: kubernetes.io/docs/setup/certificates. As admin, I am able to communicate with the apiserver (kubectl get pods, etc.), and the kubelet service is able to communicate with the apiserver (PODs are set up and running). But... – JaroVojtek

1 Answer


I found out the reason why it did not work.

When creating the .csr file I used this:

openssl req -new -key apiserver-kubelet-client.key -out apiserver-kubelet-client.csr -subj "/O=system:masters,CN=kube-apiserver-kubelet-client"

But the formatting in -subj was wrong, which caused problems with parsing the right CN from the certificate. Instead of "/O=system:masters,CN=kube-apiserver-kubelet-client" it needs to be:

openssl req -new -key apiserver-kubelet-client.key -out apiserver-kubelet-client.csr -subj "/O=system:masters/CN=kube-apiserver-kubelet-client"

Certificates generated from both .csr files look the same in the -text view, but they act differently.
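The difference is visible with openssl alone. In -subj, only / separates the RDN fields; a comma is an ordinary character inside a value, so the comma form produces a subject with a single O attribute (whose value contains the literal string ",CN=...") and no CN at all. Since the kubelet takes the username from the client certificate's CN, a certificate without a CN would explain the 401 Unauthorized on logs/exec. A self-contained sketch with a throwaway key in a temp directory:

```shell
set -e
tmp=$(mktemp -d)
openssl genrsa -out "$tmp/client.key" 2048 2>/dev/null

# Comma form: the comma is NOT a field separator, so everything after
# "/O=" becomes one O value and the subject contains no CN attribute.
openssl req -new -key "$tmp/client.key" -out "$tmp/wrong.csr" \
  -subj "/O=system:masters,CN=kube-apiserver-kubelet-client"
openssl req -noout -subject -nameopt RFC2253 -in "$tmp/wrong.csr"

# Slash form: / separates the RDNs, so the subject has both O and CN.
openssl req -new -key "$tmp/client.key" -out "$tmp/right.csr" \
  -subj "/O=system:masters/CN=kube-apiserver-kubelet-client"
openssl req -noout -subject -nameopt RFC2253 -in "$tmp/right.csr"
```

Printing the subjects with -nameopt RFC2253 makes the difference obvious: the first CSR shows one O attribute with an escaped comma in its value, while the second shows separate CN and O attributes.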