
I have a legacy application we've started running in Kubernetes. The application listens on two different ports, one for the general web page and another for a web service. In the long run we may try to change some of this but for the moment we're trying to get the legacy application to run as is. The current configuration has a single service for both ports:

apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    app: my-app
  ports:
  - name: web
    port: 8080
    protocol: TCP
    targetPort: 8080
  - name: service
    port: 8081
    protocol: TCP
    targetPort: 8081

Then I'm using a single ingress to route traffic to the correct service port based on path:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
  annotations:
    nginx.ingress.kubernetes.io/upstream-hash-by: "$remote_addr"
spec:
  rules:
  - host: myapp.test.com
    http:
      paths:
      - path: /app
        pathType: Prefix
        backend:
          service:
            name: app
            port:
              number: 8080
      - path: /service
        pathType: Prefix
        backend:
          service:
            name: app
            port:
              number: 8081

This works great for routing: requests coming into the ingress are routed to the correct service port based on path. The problem is that for this legacy app to work, requests to both ports 8080 and 8081 need to be routed to the same pod for each client.

You can see I tried adding the upstream-hash-by annotation. It ensures that all requests to 8080 from one client go to the same pod, and that all requests to 8081 from one client go to the same pod, but not that those are the same pod for any one client. With a single pod instance everything works, but once I spin up additional pods, some clients get /app requests routed to one pod and /service requests routed to another, and this application cannot currently handle that.

I have tried other ingress annotations, including nginx.ingress.kubernetes.io/affinity: "cookie" and nginx.ingress.kubernetes.io/affinity-mode: "persistent", as well as adding sessionAffinity: ClientIP to the service, but so far nothing has worked. The goal is that all requests to either path get routed to the same pod for any one client. Any help would be greatly appreciated.
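For reference, these are the affinity attempts described above, sketched as fragments (the annotation names and the sessionAffinity field are the ones mentioned in the text; the timeoutSeconds value shown is an assumed default, not something from my actual config):

```yaml
# Attempt 1: cookie-based affinity annotations on the Ingress.
# Each service port is a separate nginx upstream, so this pinned
# /app and /service independently, not to the same pod.
metadata:
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
---
# Attempt 2: ClientIP affinity on the Service.
spec:
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800  # assumed: the Kubernetes default
```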


1 Answer


Session persistence settings will only work if you configure kube-proxy to forward requests only to pods on the local node, rather than to random pods across the cluster.

You can do this by setting the following field on the service:

service.spec.externalTrafficPolicy: Local
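If you prefer not to edit the manifest, the same setting can be applied to the existing service in place (using the service name `app` from the question):

```shell
# Switch the existing service to node-local traffic only
kubectl patch svc app -p '{"spec":{"externalTrafficPolicy":"Local"}}'

# Confirm the change took effect (should print "Local")
kubectl get svc app -o jsonpath='{.spec.externalTrafficPolicy}'
```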

You can read more here:

https://kubernetes.io/docs/tutorials/services/source-ip/

After doing this, your ingress annotations should work. Note that I have only tested this with an external load balancer, not with an ingress.

Keeping everything else the same, this service definition should work:

apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  externalTrafficPolicy: Local
  selector:
    app: my-app
  ports:
  - name: web
    port: 8080
    protocol: TCP
    targetPort: 8080
  - name: service
    port: 8081
    protocol: TCP
    targetPort: 8081
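One way to check that the affinity actually holds is to hit both paths from the same client and compare which pod answers. The `/whoami` endpoint here is hypothetical; substitute any path whose response identifies the serving pod (e.g. one that echoes the pod hostname):

```shell
# With working affinity, both requests from the same client IP
# should report the same pod.
curl -s http://myapp.test.com/app/whoami
curl -s http://myapp.test.com/service/whoami
```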