4 votes

We make use of Ingress to create HTTPS load balancers that forward directly to our (typically Node.js) services. However, we have recently wanted more control over the traffic in front of Node.js than the Google load balancer provides:

  • Standardised, custom error pages
  • Standard rewrite rules (e.g. redirecting HTTP to HTTPS)
  • Decouple pod readinessProbes from load balancer health checks (so we can still serve custom error pages when there are no healthy pods).

We use nginx in other parts of our stack, so it seems like a good choice, and I have seen several examples of nginx being used to front services in Kubernetes, typically in one of two configurations.

  • An nginx container in every pod, forwarding traffic directly to the application on localhost (a sketch of this follows the list).
  • A separate nginx Deployment & Service, scaled independently and forwarding traffic to the appropriate Kubernetes Service.
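
For concreteness, a minimal sketch of the first (sidecar) configuration is below. All names, images and port numbers are my own illustrative assumptions rather than anything from an actual setup; the second configuration is simply an ordinary nginx Deployment plus Service that proxies to the application's Service.

    # Option 1 (sketch): nginx sidecar in the same pod, proxying to the app on localhost.
    # Names, images and ports are illustrative assumptions.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
            - name: app
              image: myapp:latest     # the Node.js service, assumed to listen on 3000
              ports:
                - containerPort: 3000
            - name: nginx
              image: nginx:1.13       # sidecar; its config would proxy_pass to http://127.0.0.1:3000
              ports:
                - containerPort: 80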

What are the pros/cons of each method and how should I determine which one is most appropriate for our use case?

2 Answers

2 votes

Following on from Vincent H's answer, I'd suggest chaining the Google HTTPS load balancer in front of an nginx ingress controller.

As you've mentioned, this can scale independently, has its own health checks, and lets you standardise your error pages.

We've achieved this by having a single ingress object annotated kubernetes.io/ingress.class: "gce", whose default backend is the Service for our nginx ingress controller. All our other ingress objects are annotated with kubernetes.io/ingress.class: "nginx".
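
For illustration only, the two layers might look roughly like the manifests below. The names, host and ports are placeholders of mine, and the API version matches the annotation-based ingress class of that era; on newer clusters you'd use networking.k8s.io/v1 with ingressClassName instead.

    # Entry point: a GCE ingress whose default backend is the Service in front
    # of the nginx ingress controller. Names and ports are placeholders.
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: gce-entrypoint
      annotations:
        kubernetes.io/ingress.class: "gce"
    spec:
      backend:
        serviceName: nginx-ingress-controller
        servicePort: 80
    ---
    # Per-application ingress objects are handled by the nginx controller only.
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: myapp
      annotations:
        kubernetes.io/ingress.class: "nginx"
    spec:
      rules:
        - host: myapp.example.com
          http:
            paths:
              - path: /
                backend:
                  serviceName: myapp
                  servicePort: 80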

We're using the controller documented at https://github.com/kubernetes/ingress/tree/master/controllers/nginx, with a custom /etc/nginx/template/nginx.tmpl allowing complete control over the generated ingress configuration.
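
As a rough sketch of how that override can be wired up (I'm assuming the usual ConfigMap-volume approach; the ConfigMap and volume names are placeholders, and the exact mount path should be checked against your controller version):

    # A ConfigMap holding the customised template (placeholder name).
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nginx-template
    data:
      nginx.tmpl: |
        # ...customised template contents...

    # Then, in the nginx ingress controller Deployment's pod spec (fragment only):
    #
    #   volumeMounts:                       # on the controller container
    #     - name: nginx-template-volume
    #       mountPath: /etc/nginx/template
    #       readOnly: true
    #   volumes:                            # on the pod spec
    #     - name: nginx-template-volume
    #       configMap:
    #         name: nginx-template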

For complete transparency, we haven't (yet) set up custom error pages in the nginx controller; however, the documentation appears straightforward.
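
Based on the controller documentation, the custom error codes appear to be declared in the controller's ConfigMap and then served by its default backend; the sketch below is untested, so treat the key name and status-code list as assumptions to verify against the controller version in use.

    # Untested sketch: tell the nginx ingress controller which status codes to
    # intercept; responses for those codes are then fetched from the controller's
    # default backend, which can serve the standardised error pages.
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nginx-ingress-configuration   # must match the controller's --configmap flag
    data:
      custom-http-errors: "404,502,503,504"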

1 vote

One of the requirements listed is decoupling the pod readinessProbes so that custom error pages can still be served. If you add an nginx container to every pod, that isn't possible: probes act on the whole pod, so a failing liveness probe restarts it and a failing readinessProbe removes it from the Service endpoints, taking the nginx sidecar out of rotation along with the application. Personally I also prefer to decouple as much as possible, so you can scale the pods independently, assign custom machine types if needed, and save some resources by starting only as many nginx instances as you really need (mostly memory).
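
To make the decoupling concrete, here is a hedged sketch of a standalone nginx Deployment whose readinessProbe checks only nginx itself, so it can keep serving custom error pages even when every upstream application pod is unhealthy. The image, port and /healthz path are illustrative assumptions; /healthz would need a matching location block in the nginx configuration.

    # Sketch: a separate nginx Deployment with its own readinessProbe that does
    # not depend on the health of the upstream application pods.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-frontend
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx-frontend
      template:
        metadata:
          labels:
            app: nginx-frontend
        spec:
          containers:
            - name: nginx
              image: nginx:1.13
              ports:
                - containerPort: 80
              readinessProbe:
                httpGet:
                  path: /healthz   # served by nginx itself, not proxied upstream (assumed location block)
                  port: 80
                initialDelaySeconds: 5
                periodSeconds: 10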