6
votes

I have two kubernetes clusters on GKE: one public that handles interaction with the outside world and one private for internal use only.

The public cluster needs to access some services on the private cluster, and I have exposed these to the pods of the public cluster through internal load balancers. Currently I'm specifying the internal IP addresses for the load balancers to use and passing these IPs to the public pods, but I would prefer that the load balancers choose any available internal IP addresses and that I pass their DNS names to the public pods instead.

Internal load balancer DNS is available for regular internal load balancers that serve VMs and the DNS will be of the form [SERVICE_LABEL].[FORWARDING_RULE_NAME].il4.[REGION].lb.[PROJECT_ID].internal, but is there something available for internal load balancers on GKE? Or is there a workaround that would enable me to accomplish something similar?


3 Answers

5
votes

I've never heard of built-in DNS for load balancers in GKE, but we handle this quite simply. We run ExternalDNS, a Kubernetes service that manages DNS records for various resources such as load balancers and ingresses. What you can do:

  1. Create a Cloud DNS internal (private) zone. Make sure you integrate it with your VPC(s).
  2. Make sure your Kubernetes nodes' service account has DNS Administrator (or the much broader Editor) permissions.
  3. Install ExternalDNS.
  4. Annotate your internal load balancer Service with external-dns.alpha.kubernetes.io/hostname=your.hostname.here (see the sketch after this list).
  5. Verify that the DNS record was created and can be resolved within your VPC.
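A minimal sketch of step 4, assuming an internal LoadBalancer Service on GKE (the service name, selector, and hostname are placeholders, and the hostname must live in the private zone from step 1):

apiVersion: v1
kind: Service
metadata:
  name: my-internal-service
  annotations:
    # Ask GKE for an internal (VPC-only) load balancer
    cloud.google.com/load-balancer-type: "Internal"
    # ExternalDNS picks this up and creates the record in the Cloud DNS private zone
    external-dns.alpha.kubernetes.io/hostname: my-internal-service.example.internal
spec:
  type: LoadBalancer
  selector:
    app: my-internal-app
  ports:
    - port: 80
      targetPort: 8080

The load balancer is then free to pick any available internal IP, and the public pods only need to know the my-internal-service.example.internal name.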
1
vote

I doubt the "Internal load balancer DNS" route works, but here are some workarounds that come to mind:

1) Ingress: In your public cluster, map all private service names to an ingress controller in your private cluster. The ingress can route the requests by host name to the correct service; see the sketch below.
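A rough sketch of the private-cluster side, assuming an NGINX ingress controller that is itself exposed through an internal load balancer (hostnames and service names are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: private-services
spec:
  ingressClassName: nginx
  rules:
    # Route by host name to the correct internal service
    - host: service-a.private
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-a
                port:
                  number: 80
    - host: service-b.private
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-b
                port:
                  number: 80

On the public cluster side, all of these host names would then need to resolve to the ingress controller's internal address, for example via ExternalName Services or the stub domain approach in option 2.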

2) Stub domains: Use a common suffix for your private services (for example *.private), and use the private cluster's kube-dns to resolve those service names (see https://kubernetes.io/blog/2017/04/configuring-private-dns-zones-upstream-nameservers-kubernetes/)

Example:

# kube-dns ConfigMap in the cluster that needs to resolve the *.private names
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  # Send *.private queries to a DNS server that can resolve them,
  # e.g. the private cluster's kube-dns (10.2.3.4 is just an example IP)
  stubDomains: |
    {"private": ["10.2.3.4"]}
  # Everything else goes to the upstream resolvers
  upstreamNameservers: |
    ["8.8.8.8", "8.8.4.4"]

3) Haven't tried it, but kEdge seems to be another solution to securely communicate between clusters: https://improbable.io/blog/introducing-kedge-a-fresh-approach-to-cross-cluster-communication

1
vote

You can achieve this by assigning the internal load balancer an IP from the worker node CIDR. In GKE we provide three CIDR blocks when we create the cluster: 1. the worker node CIDR, 2. the Pod CIDR, 3. the service endpoint CIDR (usually used by load balancers). The CIDRs we provide for Pods and Services are visible within Kubernetes only, hence they are not visible outside the cluster.

Instead of using a service endpoint IP for the internal load balancer, you can assign an IP from the worker node CIDR, which comes from the subnet in the VPC, so the IP is reachable from pods in different clusters.
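A sketch of what that might look like, assuming the worker node subnet is 10.128.0.0/20 and 10.128.0.50 is an unused address in it (both are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-internal-service
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  # An address taken from the worker node subnet in the VPC instead of the
  # service endpoint range, so pods in the other cluster can reach it
  loadBalancerIP: 10.128.0.50
  selector:
    app: my-internal-app
  ports:
    - port: 80
      targetPort: 8080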

The downside of this approach is that you effectively lose one worker node during autoscaling, since the reserved address can no longer be handed out to a new node.