10 votes

I have created a GKE private cluster (version: 1.13.6-gke.13) using the following command:

gcloud container clusters create a-cluster-with-user-pass \
 --network vpc-name \
 --subnetwork subnet-name    \
 --enable-master-authorized-networks \
 --username random \
 --password averylongpassword \
 --enable-ip-alias \
 --enable-private-nodes \
 --enable-private-endpoint \
 --master-ipv4-cidr xxx.xx.xx.xx/28 \
 --cluster-version 1.13.6-gke.13 \
 --num-nodes 2 \
 --zone asia-south1-a

I can see that port 10255 is open on both nodes (that is, the GCP compute instances) created by the above cluster.

If I create a plain GCP compute instance (so I have 3 VM instances in total) and try to access the internal IP of a GKE node on port 10255 from this VM, I can reach it without any authentication or authorization. Below is the command used to create the GCP compute instance:

gcloud compute instances create vm-name \
 --network vpc-name \
 --subnetwork subnet-name    \
 --zone asia-south1-a

If I send a simple curl GET request to xxx.xx.xx.xx:10255/pods, I get tons of information about the pods and applications. In the Kubernetes documentation here, it is mentioned that:

--read-only-port int32
     The read-only port for the Kubelet to serve on with no authentication/authorization (set to 0 to disable) (default 10255)
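
For example, this unauthenticated probe from the extra VM (xxx.xx.xx.xx stands for a node's internal IP, as above) returns the node's pod specs as JSON:

# No token, no client cert: the read-only kubelet API answers anyway.
curl http://xxx.xx.xx.xx:10255/pods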

I tried disabling the port by SSHing into a node, editing the kube-config.yaml file, and restarting the kubelet, and that worked. But is this a good approach? I believe multiple things could break when xxx.xx.xx.xx:10255/metrics is disabled. Is there a way to secure the port rather than disabling it?

I have seen this GitHub issue, and I am certain there is a way to secure this port; I'm just not sure how to do it.

I see that the Kubernetes documentation in general provides multiple ways to secure this port. How can that be done on Google Kubernetes Engine?


3 Answers

3 votes

Kubelet exposes the collected node metrics on this port. Failing to expose these metrics there might lead to unexpected behavior, as the system would essentially be flying blind.

Since GKE is a managed system, you're not really supposed to tweak the kubelet flags: the settings will be reset when a node gets recreated, because nodes are based on GCE instance templates that do not include your own configuration.

As for security, I think it is safe to leave that port as is, since you're using a private cluster, meaning that only resources in the same VPC can reach the nodes.
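
If you want to double-check that, one quick sketch (vpc-name is the network from the question) is to list the ingress firewall rules on that VPC and confirm nothing opens tcp:10255 beyond your internal ranges:

# Inspect ingress rules on the cluster's VPC; look at SRC_RANGES for
# any rule covering tcp:10255 with a source broader than the VPC.
gcloud compute firewall-rules list \
 --filter="network:vpc-name AND direction=INGRESS"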

2 votes

As Yahir Hernández suggested in his answer, this port is used to expose metrics related to the system that ensure smooth operation. It might not be a good idea to disable this port.

What we need to do is to prevent access to this port from outside the VPC.

Since you are using GKE on GCP with a VPC, you can add firewall rules for port 10255 that allow incoming traffic only from resources on the VPC and deny access from the internet. For example:
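
A sketch with gcloud, reusing vpc-name from the question (the 10.0.0.0/16 source range is an assumption; substitute your subnet's actual CIDR):

# Allow the kubelet read-only port only from inside the VPC subnet...
gcloud compute firewall-rules create allow-kubelet-ro-internal \
 --network vpc-name \
 --direction INGRESS \
 --action ALLOW \
 --rules tcp:10255 \
 --source-ranges 10.0.0.0/16 \
 --priority 900

# ...and explicitly deny it from everywhere else at a lower priority.
gcloud compute firewall-rules create deny-kubelet-ro-external \
 --network vpc-name \
 --direction INGRESS \
 --action DENY \
 --rules tcp:10255 \
 --source-ranges 0.0.0.0/0 \
 --priority 1000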

1 vote

According to the CIS Google Kubernetes Engine (GKE) Benchmark v1.0.0, pages 196 and 197, "Recommendations" > "Kubelet":

  • it is recommended (widely applicable, should be applied to almost all environments) to disable the read-only port 10255
  • you can do this by editing the kubelet config file to set readOnlyPort to 0 and then restarting the kubelet service (a sketch follows below this list)
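
A minimal sketch of that change, assuming a Container-Optimized OS node (the config file path and node name are assumptions to verify on your image; the edit is lost whenever the node is recreated):

# SSH into one of the cluster's nodes (name and zone from your cluster).
gcloud compute ssh gke-node-name --zone asia-south1-a

# On the node: set readOnlyPort to 0 and restart the kubelet.
# The config path is an assumption for GKE's Container-Optimized OS image.
sudo sed -i 's/^readOnlyPort:.*/readOnlyPort: 0/' /home/kubernetes/kubelet-config.yaml
sudo systemctl restart kubelet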

At the same time, Google mentions (point 4.2.4) that the port is not disabled by default since:

Some GKE monitoring components use the kubelet read-only port to obtain metrics.


😒

The recommendation from the CIS benchmark is tone-deaf and close to worthless.

  • The point of GKE is to not have to manage the kubelets yourself.
  • It's not clear what effects the recommendation will have on GKE's monitoring of your cluster.
  • It's not at all obvious how you would keep the setting permanent in an auto-scaling cluster. (A privileged DaemonSet whose only purpose is to overwrite GKE's kubelet config?)

In my opinion, the best mitigation that you can do is:

  1. Ensure the port is only accessible from inside the VPC.
  2. Set good egress network policies for your pods (or manage your egress traffic in some other way). Avoid giving pods unrestricted egress on all ports. (See the sketch below.)
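
As a sketch for point 2, assuming the cluster has network policy enforcement enabled (on GKE, --enable-network-policy) and using the default namespace purely as an example, here is a default-deny egress policy that still permits DNS:

kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: default
spec:
  # Selects every pod in the namespace.
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  # Permit DNS lookups only; add narrower rules per workload as needed.
  - ports:
    - protocol: UDP
      port: 53
EOF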