
I have the following setup deployed on an Azure Kubernetes Service (AKS, Kubernetes version 1.18.14) cluster:

  • Nginx ingress controller installed via Helm chart and scaled down to a single instance. It is deployed in namespace "ingress".
  • A simple stateful application (App A) deployed in a separate namespace with 5 replicas. The "statefulness" of the application is represented by a single random int generated at startup. The application exposes one HTTP endpoint that just returns the random int. It is deployed in namespace "test" (a hypothetical manifest sketch is shown after this list).
  • Service A of type ClusterIP exposing the HTTP port of App A, also deployed in namespace "test":
apiVersion: v1
kind: Service
metadata:
  name: stateful-service
  namespace: "test"
spec:
  selector:
    app: stateful
  ports:
    - name: http
      port: 80
      targetPort: 8080
  type: ClusterIP
  • Service B of type "ExternalName" (proxy service) pointing to the cluster-internal DNS name of Service A, deployed in namespace "ingress":
apiVersion: "v1"
kind: "Service"
metadata:
  name: "stateful-proxy-service"
  namespace: "ingress"
spec:
  type: "ExternalName"
  externalName: "stateful-service.test.svc.cluster.local"
  • ingress descriptor for the application with sticky sessions enabled:
apiVersion: extensions/v1beta1
kind: "Ingress"
metadata:
  annotations:
    kubernetes.io/ingress.class: internal
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
  name: "ingress-stateful"
  namespace: "ingress"
spec:
  rules:
    - host: stateful.foo.bar
      http:
        paths:
          - path: /
            backend:
              serviceName: "stateful-proxy-service"
              servicePort: 80
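
For completeness, a minimal sketch of what the App A workload could look like. The workload kind, name and image are assumptions; only the app: stateful label (matching the Service selector) and container port 8080 (matching the Service targetPort) are taken from the manifests above:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: stateful                    # hypothetical name
  namespace: "test"
spec:
  replicas: 5
  selector:
    matchLabels:
      app: stateful                 # must match the selector of Service A
  template:
    metadata:
      labels:
        app: stateful
    spec:
      containers:
        - name: app
          image: example.org/stateful-app:latest   # hypothetical image
          ports:
            - name: http
              containerPort: 8080                  # matches the targetPort of Service A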

The issue is that sticky sessions are not working correctly with this setup. The "route" cookie is issued but does not guarantee stickiness: requests are dispatched to different pods of the backend service even though the same sticky-session cookie is sent. To be precise, the pod changes every 100 requests, which appears to be the default round-robin behaviour - it is the same even without sticky sessions enabled.

I was able to make sticky sessions work when everything is deployed in the same namespace and no "proxy" service is used. Then it is OK - requests carrying the same "route" cookie always land on the same pod.

However, my setup uses multiple namespaces, and using a proxy service is the recommended way to point an Ingress at applications deployed in other namespaces.

Any ideas how to resolve this?

Hello @vap78. I am looking into your issue. Have you tried additionally using the nginx.ingress.kubernetes.io/affinity-mode annotation with the value persistent, as described here? – Wytrzymały Wiktor
@WytrzymałyWiktor Yes - I tried this option too, with the same effect. As long as the service is of type ExternalName, the session affinity cookie has no effect. BTW - I found a workaround for this specific scenario (although it might not work in all of them): the Ingress can be deployed in the "test" namespace and then it can use Service A directly. Still, I'm wondering why it wouldn't work with ExternalName services. – vap78
Hi @vap78. Sorry for the late response. Any progress? Do you still need help with this? – Wytrzymały Wiktor
Sorry - I did not notice this question. Up to now the only working solution I found was to not use a Service of type ExternalName for Ingresses that need sticky sessions. – vap78

1 Answer


This is a community wiki answer. Feel free to expand it.

There are two ways to resolve this issue:

  1. Common approach: Deploy your Ingress rules in the same namespace as the app that they configure (a minimal sketch is shown after this list).

  2. Potentially tricky approach: try to use the ExternalName type of Service. You can define the Ingress and a Service of type ExternalName in namespace A, while the ExternalName points to the DNS name of the Service in namespace B. There are two well-written answers explaining this approach in more detail.
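
A minimal sketch of the first approach, reusing the manifests from the question: the Ingress is moved to the "test" namespace and points at Service A directly, so no ExternalName proxy Service is needed. This is the setup the asker confirmed works:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: internal
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
  name: ingress-stateful
  namespace: test                           # same namespace as App A and Service A
spec:
  rules:
    - host: stateful.foo.bar
      http:
        paths:
          - path: /
            backend:
              serviceName: stateful-service  # Service A referenced directly
              servicePort: 80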

For the ExternalName approach, note the official docs and bear in mind that:

Warning: You may have trouble using ExternalName for some common protocols, including HTTP and HTTPS. If you use ExternalName then the hostname used by clients inside your cluster is different from the name that the ExternalName references.

For protocols that use hostnames this difference may lead to errors or unexpected responses. HTTP requests will have a Host: header that the origin server does not recognize; TLS servers will not be able to provide a certificate matching the hostname that the client connected to.
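
If the ExternalName approach is still used with ingress-nginx, the Host header mismatch described in the warning can often be mitigated with the nginx.ingress.kubernetes.io/upstream-vhost annotation, which rewrites the Host header sent to the upstream. A sketch of the extra annotation added to the Ingress from the question (the value is the cluster-internal DNS name of Service A); note that this only addresses the hostname mismatch, not the cookie affinity itself:

metadata:
  annotations:
    # rewrite the Host header sent upstream to the in-cluster service name
    nginx.ingress.kubernetes.io/upstream-vhost: "stateful-service.test.svc.cluster.local"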