I have the following setup deployed on an Azure Kubernetes Service cluster (K8S version 1.18.14):
- Nginx installed via Helm chart and scaled down to a single instance. It is deployed in namespace "ingress".
- A simple stateful application (App A) deployed with 5 replicas. The "statefulness" of the application is represented by a single random int generated at startup. The application exposes one http endpoint that just returns that random int. It is deployed in namespace "test".
- Service A of type ClusterIP exposing the http port of App A, also deployed in namespace "test":
```yaml
apiVersion: v1
kind: Service
metadata:
  name: stateful-service
  namespace: "test"
spec:
  selector:
    app: stateful
  ports:
    - name: http
      port: 80
      targetPort: 8080
  type: ClusterIP
```
- Service B of type "ExternalName" (a "proxy" service) pointing to the cluster DNS name of Service A, deployed in namespace "ingress":
```yaml
apiVersion: "v1"
kind: "Service"
metadata:
  name: "stateful-proxy-service"
  namespace: "ingress"
spec:
  type: "ExternalName"
  externalName: "stateful-service.test.svc.cluster.local"
```
- An ingress descriptor for the application with sticky sessions enabled:
```yaml
apiVersion: extensions/v1beta1
kind: "Ingress"
metadata:
  annotations:
    kubernetes.io/ingress.class: internal
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
  name: "ingress-stateful"
  namespace: "ingress"
spec:
  rules:
    - host: stateful.foo.bar
      http:
        paths:
          - path: /
            backend:
              serviceName: "stateful-proxy-service"
              servicePort: 80
```
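For comparison, a sketch of the single-namespace variant that does work for me: the same Ingress, but declared in the "test" namespace and pointing at Service A directly, with no ExternalName hop (assumption: nothing else forces the Ingress to live in "ingress"):

```yaml
apiVersion: extensions/v1beta1
kind: "Ingress"
metadata:
  annotations:
    kubernetes.io/ingress.class: internal
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
  name: "ingress-stateful"
  namespace: "test"            # same namespace as App A and Service A
spec:
  rules:
    - host: stateful.foo.bar
      http:
        paths:
          - path: /
            backend:
              serviceName: "stateful-service"   # Service A directly
              servicePort: 80
```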
The issue is that sticky sessions are not working correctly with this setup. The "route" cookie is issued but does not guarantee stickiness: requests are dispatched to different pods of the backend service even though the same sticky-session cookie is sent. To be precise, the pod changes every 100 requests, which appears to be the default round-robin setting; the behaviour is the same with sticky sessions disabled.
I was able to make sticky sessions work when everything is deployed in the same namespace and no "proxy" service is used. Then it works as expected: requests carrying the same "route" cookie always land on the same pod.
However, my setup uses multiple namespaces, and a proxy service is the recommended way of using an ingress with applications deployed in other namespaces.
Any ideas how to resolve this?
Comment: Have you tried the nginx.ingress.kubernetes.io/affinity-mode annotation with the value persistent, as described here? – Wytrzymały Wiktor
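If that suggestion applies, the annotation block would look roughly like this (to my knowledge, affinity-mode was added in ingress-nginx 0.27 and "balanced" is the default):

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    # "persistent" keeps a session pinned to its pod even when the set of
    # endpoints changes; the default "balanced" may redistribute sessions
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
```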