As we can see in this documentation, a private cluster is accessible by default from VMs (GCP Compute Engine instances) in the same subnet. Here is what the docs say:
From other VMs in the cluster's VPC network: Other VMs can use kubectl to communicate with the private endpoint only if they are in the same region as the cluster and their internal IP addresses are included in the list of master authorized networks.
I have tested this:
- the cluster is accessible from VMs in the same subnetwork as the cluster
- the cluster is not accessible from VMs in different subnets.
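
For completeness, the authorized ranges can also be checked programmatically. Below is a minimal sketch that prints the cluster's master authorized networks using the google-cloud-container Python client; the project, location, and cluster names are placeholders for my environment, not values from the tests above:

```python
# Minimal sketch: list the master authorized networks configured on a GKE
# cluster using the google-cloud-container client library.
from google.cloud import container_v1

PROJECT = "my-project"          # placeholder project ID
LOCATION = "us-central1-a"      # placeholder cluster location (zone or region)
CLUSTER = "my-private-cluster"  # placeholder cluster name

client = container_v1.ClusterManagerClient()
cluster = client.get_cluster(
    name=f"projects/{PROJECT}/locations/{LOCATION}/clusters/{CLUSTER}"
)

# The CIDR blocks explicitly allowed to reach the cluster's control plane.
config = cluster.master_authorized_networks_config
print("Master authorized networks enabled:", config.enabled)
for block in config.cidr_blocks:
    print(f"  {block.display_name}: {block.cidr_block}")
```
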
How does the private cluster decide which VMs to grant access to and which to reject?