We presently have a setup where applications within our Mesos/Marathon cluster want to reach out to services which may or may not reside in that same cluster. Ingress for external traffic into the cluster is handled by an Amazon ELB sitting in front of a cluster of Traefik instances, which then pick the appropriate set of container instances to load-balance across by matching the incoming HTTP Host header against what is essentially a many-to-one mapping of configured host names to container sets. Internal-to-internal traffic takes this same route, because the DNS record associated with a given service resolves to that same ELB both inside and outside our Mesos/Marathon cluster. We also allow multiple DNS records to point at the same container set.
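To make the routing concrete, it's conceptually something like the following (shown in Traefik's file-provider YAML syntax purely for illustration; our real configuration is generated from Marathon, and all host names and backend addresses here are placeholders):

```yaml
# Illustration only: Host-header -> container-set routing; every name/address is a placeholder.
http:
  routers:
    my-app:
      # Several host names can map onto the same backend set (the many-to-one association).
      rule: "Host(`my-app.cluster1.company.com`) || Host(`my-app.company.com`)"
      service: my-app
  services:
    my-app:
      loadBalancer:
        servers:
          # The container instances Traefik load-balances across.
          - url: "http://10.0.1.10:31001"
          - url: "http://10.0.1.11:31002"
```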
This setup works, but it generates seemingly unnecessary network traffic and load on our ELBs and our Traefik cluster: if the applications in the containers (or some other component) could determine on their own that a service they wanted to call was inside the same Mesos/Marathon cluster, they could call either something internal to the cluster that fronts that set of containers, or the specific container directly, and skip the round trip out through the ELB.
From what I understand of Kubernetes, it provides the concept of Services, which essentially act as a front for a set of pods, based on a selector that determines which pods the Service should match. However, I'm not entirely sure of the mechanism by which applications in a Kubernetes cluster would transparently know to direct their network traffic to the Service IPs.
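For reference, this is the kind of Service I have in mind (a minimal sketch; the names, labels, and ports are placeholders rather than anything we run). My understanding is that the cluster DNS (kube-dns/CoreDNS) would then let pods resolve something like my-app.default.svc.cluster.local to the Service's cluster IP:

```yaml
# Hypothetical Service fronting a set of pods; all names, labels, and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-app           # resolvable in-cluster as my-app.default.svc.cluster.local
  namespace: default
spec:
  selector:
    app: my-app          # the Service fronts whichever pods carry this label
  ports:
    - name: http
      port: 80           # port exposed on the Service's cluster IP
      targetPort: 8080   # port the application containers actually listen on
```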
I think some of this can be helped by having Envoy proxy traffic meant for, e.g., <application-name>.<cluster-name>.company.com to the Service name, but if we have a CNAME that maps to that previous DNS entry (say, <application-name>.company.com), I'm not entirely sure how we can keep that traffic from exiting the cluster.
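For concreteness, this is roughly what I had in mind for the Envoy idea. It's only a sketch under my own assumptions: that Envoy would run as a sidecar or node-local proxy that the applications' outbound HTTP traffic passes through, and that every name, address, and port below is a placeholder:

```yaml
# Sketch of an Envoy (v3 API) static config that routes by Host header to an in-cluster Service.
static_resources:
  listeners:
    - name: egress_http
      address:
        socket_address: { address: 0.0.0.0, port_value: 8080 }
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                stat_prefix: egress_http
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: my-app
                      # Both the cluster-qualified name and the bare CNAME match this virtual host.
                      domains:
                        - "my-app.cluster1.company.com"
                        - "my-app.company.com"
                      routes:
                        - match: { prefix: "/" }
                          route: { cluster: my_app_service }
                http_filters:
                  - name: envoy.filters.http.router
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
    - name: my_app_service
      connect_timeout: 1s
      type: STRICT_DNS            # resolve the Service's in-cluster DNS name
      lb_policy: ROUND_ROBIN
      load_assignment:
        cluster_name: my_app_service
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address: { address: my-app.default.svc.cluster.local, port_value: 80 }
```

Even with something like this, though, I'm not sure it addresses the bare <application-name>.company.com case for traffic that doesn't pass through such a proxy, since that CNAME still ultimately resolves to the ELB; that's the part I'm most unsure about.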
Is there a good way to solve for both cases? We are trying to avoid having our applications understand that they're sitting in a particular cluster, and would prefer a component outside of the applications to perform the routing appropriately.
If I am fundamentally misunderstanding a particular component, I would happily welcome correction!