
As we can see in the documentation, a private cluster is accessible by default from VMs (GCP Compute Engine instances) in the same subnet. Here is what is mentioned in the docs:

"From other VMs in the cluster's VPC network: Other VMs can use kubectl to communicate with the private endpoint only if they are in the same region as the cluster and their internal IP addresses are included in the list of master authorized networks."

I have tested this:

  • the cluster is accessible from VMs in the same subnetwork as the cluster
  • the cluster is not accessible from VMs in different subnets (a quick way to reproduce this check is sketched below)
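
For reference, here is a minimal sketch of that reachability test, run from a VM; the cluster name my-cluster and region us-central1 are hypothetical. gcloud fetches kubeconfig credentials that point kubectl at the cluster's private endpoint, and kubectl then tries to talk to it:

    # Run on the VM: fetch credentials aimed at the cluster's *private*
    # endpoint (cluster name and region are made up for this example)
    gcloud container clusters get-credentials my-cluster \
        --region us-central1 --internal-ip

    # Succeeds from a VM in the cluster's subnet; times out from a VM
    # in a different subnet (with the default network configuration)
    kubectl get nodes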

How does this private cluster figure out which VMs to give access to and which VMs to reject?

The nodes in the cluster are auto-registered. Any other VM's internal IP address you have to add to the master authorized networks yourself. – eamon1234
No. Other VMs in the same subnet are also able to reach the cluster without adding their internal IPs to the master authorized networks. – Amit Yadav

2 Answers

1 vote

It is not controlled by the private cluster.

It is controlled by the routing and firewall rules configured for the VPC's subnets. Even within the same VPC, you can disable communication between subnets by adding a firewall rule.
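As an illustration (the network name and subnet CIDR below are made up), a single VPC firewall rule is enough to cut one subnet off from the rest:

    # Deny all traffic arriving from 10.0.2.0/24 (hypothetical CIDR of a
    # second subnet) at instances in the VPC network my-vpc; the priority
    # must be a lower number (higher precedence) than any allow rule it
    # should override (default rule priority is 1000)
    gcloud compute firewall-rules create deny-subnet-b \
        --network my-vpc \
        --direction INGRESS \
        --action DENY \
        --rules all \
        --source-ranges 10.0.2.0/24 \
        --priority 900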

https://cloud.google.com/vpc/docs/vpc#affiliated_resources

1 vote

The Compute Engine instances (or nodes) in a private cluster are isolated from the internet but have access to the master API server endpoint for authentication, which is publicly exposed in a Google-managed project. However, resources outside the VPC are not allowed to reach that endpoint by default.

Master authorized networks are used to make the GKE master API available to the whitelisted external networks/addresses that want to authenticate against it. They are not related to disallowing communication between the compute resources in the cluster's VPC. For that, you can simply use VPC-level firewall rules.
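
To make that relationship concrete (the cluster name and CIDR here are hypothetical), whitelisting an external address block looks like this:

    # Allow 203.0.113.0/29 (e.g. an office's public range) to reach the
    # master API of the hypothetical cluster my-cluster
    gcloud container clusters update my-cluster \
        --enable-master-authorized-networks \
        --master-authorized-networks 203.0.113.0/29

This only widens who may talk to the control plane endpoint; traffic between VMs inside the VPC is still governed by the VPC's firewall rules.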