- My application is running in namespace A with version X. I can access the application endpoint via the nginx ingress controller running in the same namespace A.
- I start the same application stack with version Y in namespace B and create ingress rules pointing to the same kubernetes.io/ingress.class as the controller running in namespace A.
- I also enable the canary annotations with a weight of 50% (see the sketch after this list). When I access the application endpoint via the ingress, requests are distributed across version X running in namespace A and version Y running in namespace B as per the specified weight.
- Now I change the canary weight to 100 and see all traffic going to version Y in namespace B.
- All of the above is in line with my expectations.
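For reference, the canary Ingress in namespace B looks roughly like the sketch below. The names, host, and service are placeholders; the annotations are the standard ingress-nginx canary annotations:

```yaml
# Hypothetical canary Ingress in namespace B (names/host/service are placeholders)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-canary
  namespace: B
  annotations:
    kubernetes.io/ingress.class: "nginx"             # same class as the controller in namespace A
    nginx.ingress.kubernetes.io/canary: "true"       # mark this Ingress as the canary
    nginx.ingress.kubernetes.io/canary-weight: "50"  # 50% of traffic; later changed to "100"
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: myapp
          servicePort: 80
```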
However, I now delete the application pods from namespace A but keep the following intact:
- a) The service in namespace A (for which the ingress rule is defined)
- b) The ingress rule in namespace A
- c) The nginx controller running in namespace A
- d) Namespace B, with all pods running, their service, and the canary ingress rule at 100% weight
- When I now try accessing the application endpoint, the request simply fails. I understand there are no active endpoints in namespace A (the pods were deleted), but the service is still present in namespace A, and the ingress rule in namespace B still has canary enabled with a weight of 100%. I expected traffic to be routed to the pods in namespace B, but that is not happening. The endpoint state can be verified as shown below.
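To confirm the endpoint state described above, the services in both namespaces can be checked directly (service names are placeholders):

```console
# Service in namespace A should show no endpoints after the pods are deleted
kubectl get endpoints <service-name> -n A

# Service in namespace B should still list ready pod IPs
kubectl get endpoints <service-name> -n B
```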
I have compared the configuration of the nginx controller before and after deleting the pods in namespace A (with the 100% canary ingress rule intact) using:
```console
kubectl exec <nginx-controller-pod-name> -n <namespace> -- curl localhost:10246/configuration/backends
kubectl exec <nginx-controller-pod-name> -n <namespace> -- cat nginx.conf
```
There is no difference in the output before and after the pods in namespace A are deleted.
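A minimal way to capture and compare the two states, assuming the same placeholder pod name and namespace as above:

```console
kubectl exec <nginx-controller-pod-name> -n <namespace> -- curl -s localhost:10246/configuration/backends > backends-before.json
# ... delete the application pods in namespace A ...
kubectl exec <nginx-controller-pod-name> -n <namespace> -- curl -s localhost:10246/configuration/backends > backends-after.json
diff backends-before.json backends-after.json   # produces no output, i.e. the backends are identical
```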
NOTE:
- Nginx ingress image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.2
- Kubernetes version: 1.12.7
Is this the intended behavior? I am unable to find what is driving it.