I have created a GKE private cluster (version: 1.13.6-gke.13) using the following command:
gcloud container clusters create a-cluster-with-user-pass \
--network vpc-name \
--subnetwork subnet-name \
--enable-master-authorized-networks \
--username random \
--password averylongpassword \
--enable-ip-alias \
--enable-private-nodes \
--enable-private-endpoint \
--master-ipv4-cidr xxx.xx.xx.xx/28 \
--cluster-version 1.13.6-gke.13 \
--num-nodes 2 \
--zone asia-south1-a
I can see that port 10255 is open on both of the nodes (that is, the GCP Compute Engine instances) created by the above cluster.
If I create a plain GCP compute instance in the same network (so there are three VM instances in total) and try to access the internal IP of a GKE node on port 10255 from that VM, I can reach it without any authentication or authorization. Below is the command used to create the compute instance:
gcloud compute instances create vm-name \
--network vpc-name \
--subnetwork subnet-name \
--zone asia-south1-a
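From that VM, reachability of the kubelet read-only port can be confirmed with a plain TCP probe (the node IP below is a placeholder for a GKE node's internal IP; no credentials are involved either way):

```shell
NODE_IP="xxx.xx.xx.xx"  # placeholder: internal IP of a GKE node
PORT=10255              # kubelet read-only port

# -z: only scan for a listener, -w 2: two-second timeout
nc -z -w 2 "$NODE_IP" "$PORT" && echo "open" || echo "closed/unreachable"
```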
A simple curl GET request to xxx.xx.xx.xx:10255/pods returns a large amount of information about the pods and applications running on that node. As I can see in the Kubernetes documentation here, it is mentioned that:
--read-only-port int32
The read-only port for the Kubelet to serve on with no authentication/authorization (set to 0 to disable) (default 10255)
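Concretely, the unauthenticated request I am describing looks like this (the node IP stays a placeholder as above; no token, certificate, or password is sent):

```shell
NODE_IP="xxx.xx.xx.xx"  # placeholder: internal IP of a GKE node
URL="http://${NODE_IP}:10255/pods"

# Unauthenticated read of the kubelet's pod list; fail fast if unreachable
curl -s --max-time 5 "$URL" || echo "unreachable"
```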
I managed to disable the port by SSHing into a node, editing the kube-config.yaml file there, and restarting the kubelet. But is this a good approach? I suspect several things could break when xxx.xx.xx.xx:10255/metrics is disabled. Is there a way to secure the port, rather than disabling it?
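For reference, the manual change I made amounts to setting the kubelet's read-only port to 0 in its configuration. A minimal sketch of the relevant KubeletConfiguration fragment (the field name comes from the upstream kubelet documentation; the file name and its location on GKE nodes are assumptions and vary by node image):

```yaml
# KubeletConfiguration fragment (file path on the node varies by GKE image)
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
readOnlyPort: 0   # 0 disables the unauthenticated read-only port (default 10255)
```

After editing, the kubelet has to be restarted (e.g. `sudo systemctl restart kubelet`), and the change does not survive the node being recreated or upgraded, which is part of why this feels fragile.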
I have seen this GitHub issue, so I am fairly certain there is a way to secure this port; I am just not sure how to do it.
The Kubernetes documentation in general describes multiple ways to secure the kubelet ports. How can that be done on Google Kubernetes Engine?