2
votes

I work in a mixed GCE / GKE environment as I am sure many GKE customers do.

Currently, the skyDNS service from the cluster is not exposed to GCE hosts in the same project. The IP address range used by the DNS service is different from the normal application service range, which is routable from all of GCE (each cluster node gets a route for its own subnet).

I specifically have a headless service in GKE that I want to reliably access via DNS from my GCE hosts. As a workaround, I added a route pointing at the node hosting the DNS pod, and it works.
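For reference, the workaround route can be created with `gcloud`. This is only a sketch: the route name, DNS service IP (`10.0.0.10`), node instance name, and zone are all placeholders, not values from the question.

```shell
# Hypothetical example: route the cluster DNS service IP to the node
# currently running the DNS pod. All names/IPs below are assumptions.
gcloud compute routes create kube-dns-route \
  --destination-range 10.0.0.10/32 \
  --next-hop-instance gke-mycluster-node-1 \
  --next-hop-instance-zone us-central1-a
```

The obvious fragility is the `--next-hop-instance` value, which has to be updated by hand if the DNS pod lands on a different node.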

However, I fully understand that a simple skyDNS pod restart could break my route.

My question is: can the cluster master add this route to GCE like it does for normal node subnets, or, even better, pull the DNS address from the normal service subnets where routing already works?

Can this be done?

1

1 Answer

0
votes

The DNS service should be no different from other services. The general problem of accessing private GKE services from outside the cluster is not currently solved all that well.

Your "routing to the node" should actually keep working across skyDNS restarts. The route targets the service IP, which is stable; if the pod happens to get scheduled somewhere else, kube-proxy plus the pod routing rules will get traffic where it needs to go. That is actually one of the options proposed when similar questions (here & here) have been asked.
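You can verify this yourself after forcing a restart: the service ClusterIP stays fixed while only the pod IP changes. A sketch, assuming the usual `kube-system` namespace and `k8s-app=kube-dns` label (adjust for your cluster):

```shell
# Service ClusterIP: stable across pod restarts and rescheduling.
kubectl --namespace=kube-system get svc kube-dns

# Pod IP and hosting node: these can change on restart, but the
# route to the service IP keeps working via kube-proxy on any node.
kubectl --namespace=kube-system get pods -l k8s-app=kube-dns -o wide
```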