8
votes

I had sticky sessions working in my dev environment with minikube, using the following configuration:

Ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gl-ingress
  annotations:
    nginx.ingress.kubernetes.io/affinity: cookie
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.global-static-ip-name: "projects/oceanic-isotope-199421/global/addresses/web-static-ip"
spec:
  backend:
    serviceName: gl-ui-service
    servicePort: 80
  rules:
  - http:
      paths:
      - path: /api/*
        backend:
          serviceName: gl-api-service
          servicePort: 8080

Service:

apiVersion: v1
kind: Service
metadata:
  name: gl-api-service
  labels:
    app: gl-api
  annotations:
    ingress.kubernetes.io/affinity: 'cookie'
spec:
  type: NodePort
  ports:
  - port: 8080
    protocol: TCP
  selector:
    app: gl-api

Now that I have deployed my project to GKE, sticky sessions no longer function. I believe the reason is that the Global Load Balancer configured in GKE does not have session affinity with the NGINX Ingress controller. Has anyone had any luck wiring this up? Any help would be appreciated. I want to establish session affinity: Client Browser > Load Balancer > Ingress > Service. The actual session lives in the pods behind the service. It's an API Gateway (built with Zuul).


5 Answers

5
votes

Session affinity is not available yet in the GCE/GKE Ingress controller.

In the meantime, as a workaround, you can use the GCE API directly to create the HTTP load balancer. Note that you can't use Ingress at the same time in the same cluster.

  1. Use NodePort for the Kubernetes Service. Set the value of the port in spec.ports[*].nodePort, otherwise a random one will be assigned.
  2. Disable kube-proxy SNAT load balancing (see the sketch below).
  3. Create a Load Balancer from the GCE API, with cookie session affinity enabled. As backend, use the port from step 1.
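
A minimal sketch of steps 1 and 2 in one manifest, reusing the question's gl-api-service; the nodePort value 30080 is an arbitrary example, and externalTrafficPolicy: Local is the field that disables the kube-proxy SNAT hop:

apiVersion: v1
kind: Service
metadata:
  name: gl-api-service
  labels:
    app: gl-api
spec:
  type: NodePort
  # Step 2: keep traffic on the node the GCLB picked (no kube-proxy SNAT)
  externalTrafficPolicy: Local
  ports:
  - port: 8080
    protocol: TCP
    nodePort: 30080  # Step 1: fixed nodePort; use this port as the LB backend
  selector:
    app: gl-api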
3
votes

Good news! They finally support these kinds of tweaks as beta features!

Beginning with GKE version 1.11.3-gke.18, you can use an Ingress to configure these properties of a backend service:

  • Timeout
  • Connection draining timeout
  • Session affinity

The configuration information for a backend service is held in a custom resource named BackendConfig, which you can "attach" to a Kubernetes Service.
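
A sketch of what that attachment looks like; the BackendConfig name is illustrative, and on 1.11-era clusters the annotation key was still the beta one, beta.cloud.google.com/backend-config (newer versions use cloud.google.com/backend-config):

apiVersion: cloud.google.com/v1beta1  # beta API at the GKE versions discussed here
kind: BackendConfig
metadata:
  name: gl-api-backendconfig  # illustrative name
spec:
  sessionAffinity:
    affinityType: "GENERATED_COOKIE"
---
apiVersion: v1
kind: Service
metadata:
  name: gl-api-service
  annotations:
    # Maps the Service port to the BackendConfig above
    beta.cloud.google.com/backend-config: '{"ports": {"8080": "gl-api-backendconfig"}}'
spec:
  type: NodePort
  ports:
  - port: 8080
    protocol: TCP
  selector:
    app: gl-api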

Together with other sweet beta features (like CDN, Armor, etc.) you can find how-to guides here: https://cloud.google.com/kubernetes-engine/docs/how-to/configure-backend-service

2
votes

Based on this: https://github.com/kubernetes/ingress-gce/blob/master/docs/annotations.md there is no annotation available that could affect the session affinity setting of the Google Cloud Load Balancer (GCLB) that is created as a result of the Ingress creation. As such:

  1. This has to be turned on by hand: either, as suggested above, by creating the LB yourself, or by letting the Ingress controller do so and then changing the backend configuration for each backend (either via the GUI or the gcloud CLI). IMHO the latter seems faster and less prone to errors. (Tested: the cookie "GCLB" was returned by the LB after the config change got propagated automatically, and subsequent requests including the cookie were routed to the same node.)
  2. As rightfully pointed out by Matt-y-er: service.spec.externalTrafficPolicy has to be set to "Local" to disable forwarding from the node the GCLB selected to another. However:
  3. One would still need to ensure:
    • the GCLB does not send traffic to nodes which don't run the pod, or
    • there's a pod running on all nodes (and only a single pod, as the externalTrafficPolicy setting would not prevent load balancing over multiple local pods)

With regard to #3, the simple solution: run the pod as a DaemonSet, so that exactly one instance runs on every node.

The more complicated solution (which allows having fewer pods than nodes):

  • It seems that the GCLB's health check doesn't need to be adjusted, as the Ingress rule definition automatically sets up a health check to the backend (and not to the default healthz service)
  • supply anti-affinity rules to make sure there's at most a single instance of the pod on each node (https://kubernetes.io/docs/concepts/configuration/assign-pod-node/); see the sketch below
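
A sketch of such an anti-affinity rule, reusing the question's app: gl-api labels (the image name is illustrative); a required rule keyed on the node hostname guarantees at most one gl-api pod per node:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: gl-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: gl-api
  template:
    metadata:
      labels:
        app: gl-api
    spec:
      affinity:
        podAntiAffinity:
          # Hard requirement: never co-schedule two gl-api pods on one node
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: gl-api
            topologyKey: kubernetes.io/hostname
      containers:
      - name: gl-api
        image: gl-api:latest  # illustrative image
        ports:
        - containerPort: 8080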

Note: The above anti-affinity version was tested on 24th July 2018 with Kubernetes version 1.10.4-gke.2 on a 2-node cluster running COS (the default GKE VM image).

0
votes

I was trying the GKE tutorial for that on version 1.11.6-gke.6 (the latest available). Stickiness was not there... the only thing that worked was setting externalTrafficPolicy: Local on the service:

spec:
  type: NodePort
  externalTrafficPolicy: Local

I opened a defect with Google about this, and they accepted it, without committing to an ETA: https://issuetracker.google.com/issues/124064870

0
votes

For the BackendConfig of the Ingress load balancer, documentation can be found here: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features

An example snippet for the generated-cookie affinity type is:

apiVersion: cloud.google.com/v1beta1  # cloud.google.com/v1 on newer GKE versions
kind: BackendConfig
metadata:
  name: my-backendconfig  # illustrative name
spec:
  timeoutSec: 1800
  connectionDraining:
    drainingTimeoutSec: 1800
  sessionAffinity:
    affinityType: "GENERATED_COOKIE"
    affinityCookieTtlSec: 1800