  • My application is running in namespace A with version X. I am able to access the application endpoint via the NGINX ingress controller running in the same namespace A.
  • I start the same application stack with version Y in namespace B and create ingress rules pointing to the same kubernetes.io/ingress.class as the controller running in namespace A.
  • I also enable the canary annotations with a weight of 50% (see the illustrative manifest after this list). When I access the application endpoint via the ingress, requests are distributed across version X running in namespace A and version Y running in namespace B according to the specified weight.
  • Now I change the canary weight to 100 and see all traffic going to version Y in namespace B.
  • All of the above matches my expectations.
  • However, I then delete the application pods from namespace A but keep the following intact:

    • a) The Service running in namespace A (for which the ingress rule is defined)
    • b) The Ingress rule in namespace A and the NGINX controller running in namespace A
    • c) Namespace B has all pods running, with the corresponding Service and Ingress rules at 100% canary weight
  • When I now try accessing the application endpoints, the requests simply fail. I understand there are no active endpoints in namespace A (the pods were deleted), but the Service is still available in namespace A, and the Ingress rules in namespace B have canary enabled with a weight of 100%. I expected traffic to be routed to the pods in namespace B, but that is not happening.
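
For reference, a minimal sketch of the canary Ingress in namespace B in this setup; the ingress name, host, and service name are assumptions, while the annotations are the standard nginx ingress canary annotations:

apiVersion: extensions/v1beta1          # Ingress API version in use around Kubernetes 1.12
kind: Ingress
metadata:
  name: myapp-canary                    # hypothetical name
  namespace: B
  annotations:
    kubernetes.io/ingress.class: "nginx"               # same class as the controller in namespace A
    nginx.ingress.kubernetes.io/canary: "true"         # mark this ingress as the canary
    nginx.ingress.kubernetes.io/canary-weight: "100"   # 100 = send all traffic to this backend
spec:
  rules:
  - host: myapp.example.com             # hypothetical host, same as the main ingress
    http:
      paths:
      - path: /
        backend:
          serviceName: myapp            # hypothetical Service name in namespace B
          servicePort: 80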

I have compared the configuration of the NGINX controller before and after deleting the pods in namespace A (with the 100% canary ingress rule intact) using:

kubectl exec <nginx-controller-pod-name> -n <namespace> -- curl localhost:10246/configuration/backends

kubectl exec <nginx-controller-pod-name> -n <namespace> -- cat nginx.conf

There is no difference in the output before and after deleting the pods in namespace A.
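
For completeness, this is roughly how the comparison was captured and diffed (pod and namespace names are placeholders, as in the commands above):

kubectl exec <nginx-controller-pod-name> -n <namespace> -- curl -s localhost:10246/configuration/backends > backends-before.json
kubectl exec <nginx-controller-pod-name> -n <namespace> -- cat nginx.conf > nginx-before.conf
# ... delete the pods in namespace A ...
kubectl exec <nginx-controller-pod-name> -n <namespace> -- curl -s localhost:10246/configuration/backends > backends-after.json
kubectl exec <nginx-controller-pod-name> -n <namespace> -- cat nginx.conf > nginx-after.conf
diff backends-before.json backends-after.json
diff nginx-before.conf nginx-after.conf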

NOTE:

  • Nginx ingress image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.2
  • Kubernetes version: 1.12.7

Is this the intended behavior? I am unable to find what is driving this behavior.

1 Answer

You need to perform the steps below before you delete the pods in namespace A.

  1. Delete the canary ingress.
  2. Point the main application ingress to send traffic to the new version (a rough sketch of both steps follows below).
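
A rough sketch of those two steps, assuming hypothetical ingress names (myapp-canary in namespace B, myapp in namespace A) and a manifest you would write yourself:

# 1. Delete the canary ingress so only the non-canary rules remain:
kubectl delete ingress myapp-canary -n B

# 2. Point the main application ingress at the new version. An Ingress can only
#    reference Services in its own namespace, so in this cross-namespace setup that
#    usually means applying an equivalent non-canary ingress for the same host in
#    namespace B (or redeploying version Y behind the existing Service in namespace A):
kubectl apply -n B -f main-ingress-v2.yaml   # hypothetical manifest: same host/path, no canary annotations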

As described here, when you remove pods the endpoints change, and an endpoint change neither regenerates the nginx.conf file nor reloads NGINX. Instead, the new list of endpoints is sent to a Lua handler running inside NGINX via an HTTP POST request. You can check the logs of the Lua handler to verify that. In relatively big clusters with frequently deployed apps, this feature saves a significant number of NGINX reloads, which would otherwise affect response latency, load-balancing quality (after every reload NGINX resets the state of load balancing), and so on. When you create a new ingress, the controller does change nginx.conf and reloads it. This explains why you see no change in nginx.conf.
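
If you want to observe those dynamic endpoint updates yourself, you can follow the controller logs while pods come and go; the exact wording of the log line below is indicative and may differ between controller versions:

kubectl logs <nginx-controller-pod-name> -n <namespace> -f | grep -i "dynamic reconfiguration"
# On 0.26.x you should see lines similar to "Dynamic reconfiguration succeeded."
# whenever endpoints change, while nginx.conf itself stays untouched (no reload).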