5 votes

I am trying to further understand what exactly is happening when I provision a private cluster in Google's Kubernetes Engine.

Google provides an example here of provisioning a private cluster in which the control plane services (e.g. the Kubernetes API) live in the 172.16.0.16/28 range.

https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters

gcloud beta container clusters create pr-clust-1 \
    --private-cluster \
    --master-ipv4-cidr 172.16.0.16/28 \
    --enable-ip-alias \
    --create-subnetwork ""

When I run this command, I see that:

  • I now have a few GKE subnets in my VPC that belong to the cluster's subnets for nodes and services. These are in the 10.0.0.0/8 range.
  • I don't have any subnets in the 172.16.0.0/16 address space.
  • I do have some new peerings and routes that seem to be related. For example, there is a new route 'peering-route-a08d11779e9a3276' with a destination address range of '172.16.0.16/28' and next hop 'gke-62d565a060f347e0fba7-3094-3230-peer'. This peering then points to 'gke-62d565a060f347e0fba7-3094-bb01-net'.
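
(For reference, that route can also be listed directly; the name filter below is just an illustrative guess matching the peering-route-* naming:)

gcloud compute routes list --filter="name~^peering-route"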

bash$ gcloud compute networks subnets list | grep us-west1
default                     us-west1  default  10.138.0.0/20
gke-insti3-subnet-62d565a0  us-west1  default  10.2.56.0/22

(venv) Christophers-MacBook-Pro:Stratus-Cloud christophermutzel$ gcloud compute networks peerings list
NAME                                     NETWORK  PEER_PROJECT              PEER_NETWORK                            AUTO_CREATE_ROUTES  STATE   STATE_DETAILS
gke-62d565a060f347e0fba7-3094-3230-peer  default  gke-prod-us-west1-a-4180  gke-62d565a060f347e0fba7-3094-bb01-net  True                ACTIVE  [2018-08-23T16:42:31.351-07:00]: Connected.

Is gke-62d565a060f347e0fba7-3094-bb01-net a peered VPC that Google manages for the GKE service, and is it where the Kubernetes management endpoints (the control plane components in the 172.16.0.16/28 range) live?

Further - how are my requests making it to the Kubernetes API server?


1 Answer

12 votes

The Private Cluster feature of GKE depends on the Alias IP Ranges feature of VPC networking, so there are multiple things happening when you create a private cluster:

  • The --enable-ip-alias flag tells GKE to use a subnetwork that has two secondary IP ranges: one for pods and one for services. This allows the VPC network to understand all the IP addresses in your cluster and route traffic appropriately.

  • The --create-subnetwork flag tells GKE to create a new subnetwork (gke-insti3-subnet-62d565a0 in your case) and choose its primary and secondary ranges automatically. Note that you could instead choose the secondary ranges yourself with --cluster-ipv4-cidr and --services-ipv4-cidr. Or you could even create the subnetwork yourself and tell GKE to use it with the flags --subnetwork, --cluster-secondary-range-name, and --services-secondary-range-name (see the creation sketch after this list).

  • The --private-cluster flag tells GKE to create a new VPC network (gke-62d565a060f347e0fba7-3094-bb01-net in your case) in a Google-owned project and connect it to your VPC network using VPC Network Peering. The Kubernetes management endpoints live in the range you specify with --master-ipv4-cidr (172.16.0.16/28 in your case). An Internal Load Balancer is also created in the Google-owned project, and this is what your worker nodes communicate with. This ILB allows traffic to be load-balanced across multiple VMs in the case of a Regional Cluster. You can find this internal IP address as the privateEndpoint field in the output of gcloud beta container clusters describe (see the describe sketch after this list). The important thing to understand is that all communication between master VMs and worker node VMs happens over internal IP addresses, thanks to the VPC peering between the two networks.

  • Your private cluster also has an external IP address, which you can find as the endpoint field in the output of gcloud beta container clusters describe. This is not used by the worker nodes, but is typically used by customers to manage their cluster remotely, e.g., using kubectl.

  • You can use the Master Authorized Networks feature to restrict which IP ranges (both internal and external) have access to the management endpoints. This feature is strongly recommended for private clusters, and is enabled by default when you create the cluster using the gcloud CLI (a sketch of enabling it on an existing cluster also follows below).
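
For example, a rough sketch of creating a private cluster against a pre-existing subnetwork with named secondary ranges might look like this (the cluster name, master range, and the names my-subnet, my-pods, and my-services are placeholders, and exact flag spellings can vary between gcloud releases):

gcloud beta container clusters create pr-clust-2 \
    --private-cluster \
    --master-ipv4-cidr 172.16.0.32/28 \
    --enable-ip-alias \
    --subnetwork my-subnet \
    --cluster-secondary-range-name my-pods \
    --services-secondary-range-name my-services

The subnetwork and its two secondary ranges must already exist in your VPC before the cluster is created; GKE then uses them instead of creating a new subnetwork for you.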
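To see both addresses on your existing cluster, you can project just those fields out of describe. A sketch (the zone is an assumption based on the peering's project name, and depending on your gcloud version privateEndpoint may appear nested under privateClusterConfig):

gcloud beta container clusters describe pr-clust-1 \
    --zone us-west1-a \
    --format 'value(endpoint, privateEndpoint)'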
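And a sketch of turning on Master Authorized Networks for an existing cluster (the CIDR is just an example of a range you trust, e.g. your office or VPN egress):

gcloud container clusters update pr-clust-1 \
    --zone us-west1-a \
    --enable-master-authorized-networks \
    --master-authorized-networks 203.0.113.0/24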

Hope this helps!