4
votes

Hello everyone.

Please teach me why the kubectl get nodes command does not return master node information in a fully managed Kubernetes cluster.

I have a Kubernetes cluster in GKE. When I type the kubectl get nodes command, I get the following information.

$ kubectl get nodes
NAME                                      STATUS   ROLES    AGE     VERSION
gke-istio-test-01-pool-01-030fc539-c6xd   Ready    <none>   3m13s   v1.13.11-gke.14
gke-istio-test-01-pool-01-030fc539-d74k   Ready    <none>   3m18s   v1.13.11-gke.14
gke-istio-test-01-pool-01-030fc539-j685   Ready    <none>   3m18s   v1.13.11-gke.14
$ 

Of course, I can get the worker node information, and it matches what the GKE web console shows. By the way, I have another Kubernetes cluster, built from three Raspberry Pis with kubeadm. When I type the kubectl get nodes command against this cluster, I get the following result.

$ kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   262d   v1.14.1
node01   Ready    <none>   140d   v1.14.1
node02   Ready    <none>   140d   v1.14.1
$

This result includes master node information.

I'm curious why I cannot get the master node information in a fully managed Kubernetes cluster. I understand that the advantage of a fully managed service is that we don't have to manage the control plane ourselves. Still, I want to know how to create a Kubernetes cluster in which the master node is not displayed. I tried to create a cluster with "the hard way", but couldn't find any information that could be a hint.

One last thing: I'm still learning English, so please correct me if my wording is wrong.

2
Does this answer your question? GKE master node – Will R.O.F.
Thank you for your information. It's good information, but I want to know why I cannot get master node information using the kubectl get nodes command. In other words, I want to know what settings cause this result. – yu saito

2 Answers

3
votes

It's a good question!

The key is the kubelet component of Kubernetes.
Managed Kubernetes offerings run the control-plane components on their masters, but they don't run the kubelet there, so those machines never register themselves as Node objects with the API server. You can easily achieve the same on your DIY cluster.

The kubelet is the primary “node agent” that runs on each node. It can register the node with the apiserver using one of: the hostname; a flag to override the hostname; or specific logic for a cloud provider.

https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/

When the kubelet flag --register-node is true (the default), the kubelet will attempt to register itself with the API server. This is the preferred pattern, used by most distros.

https://kubernetes.io/docs/concepts/architecture/nodes/#self-registration-of-nodes
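
Putting those two pieces together, you can reproduce the GKE behavior on a DIY cluster in either of two ways. This is only a sketch: --register-node is a real kubelet flag, but the kubeconfig path below is just an illustrative example.

$ kubelet --register-node=false --kubeconfig=/var/lib/kubelet/kubeconfig

With self-registration disabled, the kubelet never creates a Node object for its host, so the host does not appear in kubectl get nodes (an admin would have to create the Node object manually). GKE goes one step further: the managed masters run the control-plane binaries without any kubelet at all, so nothing ever attempts to register them.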

3
votes

Because there are no nodes with that role. The control plane for GKE is hosted within their own magic system, not on your own nodes.
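
As a side note, the ROLES column itself just reflects node labels of the form node-role.kubernetes.io/<role>. kubeadm adds the node-role.kubernetes.io/master label to control-plane nodes, while GKE workers carry no such label, which is why they show <none>. You can add one yourself; the node name below is taken from the question's output:

$ kubectl label node gke-istio-test-01-pool-01-030fc539-c6xd node-role.kubernetes.io/worker=""

After that, kubectl get nodes shows worker in the ROLES column for that node.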