2 votes

I have 6 HTTP micro-services. Currently they run under an unwieldy mix of bash scripts and custom deploy tools (dokku, mup).

I dockerized them and moved to Kubernetes on AWS (set up with kops). The last piece is converting my nginx config.

I'd like:

  1. All 6 to have SSL termination (not in the docker image)
  2. 4 need websockets and client IP session affinity (Meteor, Socket.io)
  3. 5 need http->https forwarding
  4. 1 serves the same content on http and https

I handled 1 (SSL termination) by setting the service type to LoadBalancer and using AWS-specific annotations. This created AWS load balancers, but it seems like a dead end for the other requirements.
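
For reference, a sketch of what one of those Services looks like now (the certificate ARN and the names are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: admin
  annotations:
    # placeholder ARN for the certificate the ELB terminates with
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:123456789012:certificate/example"
    # the pods speak plain http behind the ELB
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
spec:
  type: LoadBalancer
  ports:
  - port: 443
    targetPort: 80
    protocol: TCP
  selector:
    app: admin
```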

I looked at Ingress, but don't see how to do it on AWS. Will this Ingress Controller work on AWS?

Do I need an nginx controller in each pod? This looked interesting, but I'm not sure how recent/relevant it is.

I'm not sure what direction to start in. What will work?

Mike

Why not have an nginx k8s service that acts like a gateway to other services? Then configure SSL/redirection for http/websockets as you will? – iamnat
@iamnat I know what you mean, but not how to do it. How do I get the service info into the nginx config w/o hard-coding it? Example? – Michael Cole
By hard-coding it, you mean without adding k8s svc dns labels? Or are you ok with having rules like proxy_pass my-service.default:8080 in your nginx.conf? – iamnat

2 Answers

7 votes

You should be able to use the nginx ingress controller to accomplish this.

  1. SSL termination
  2. Websocket support
  3. http->https redirection
  4. Turning off the http->https redirect for the one service that serves both, as described in the link above (see the sketch at the end of this answer)

The README walks you through how to set it up, and there are plenty of examples.

The basic pieces you need to make this work are:

  • A default backend that will respond with 404 when there is no matching Ingress rule
  • The nginx ingress controller which will monitor your ingress rules and rewrite/reload nginx.conf whenever they change.
  • One or more ingress rules that describe how traffic should be routed to your services.

The end result is that you will have a single ELB that corresponds to your nginx ingress controller service, which in turn is responsible for routing to your individual services according to the ingress rules specified.
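
For the service that must answer on both http and https, a minimal sketch (the host and service names are made up, and the ssl-redirect annotation spelling should be verified against your controller version; older releases may only offer a global setting in the controller's ConfigMap):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dual-protocol-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # per-rule override: do not redirect http to https here
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  tls:
  - hosts:
    - both.example.io
    secretName: tls-secret
  rules:
  - host: both.example.io
    http:
      paths:
      - backend:
          serviceName: both-service
          servicePort: 80
        path: /
```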

1 vote

There may be a better way to do this. I wrote this answer because I asked the question, and this is the best I could come up with from Pixel Elephant's doc links above.

The default-http-backend is very useful for debugging. +1

Ingress

  • this creates an endpoint on the node's IP address, which can change depending on where the ingress controller container is running
  • note the ConfigMap at the bottom; it is configured per environment

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
  name: all-ingress
spec:
  tls:
  - hosts:
    - admin-stage.example.io
    secretName: tls-secret
  rules:
  - host: admin-stage.example.io
    http:
      paths:
      - backend:
          serviceName: admin
          servicePort: http-port
        path: /
---
apiVersion: v1
data:
  enable-sticky-sessions: "true"
  proxy-read-timeout: "7200"
  proxy-send-timeout: "7200"
kind: ConfigMap
metadata:
  name: nginx-load-balancer-conf

App Service and Deployment

  • the service port needs to be named, or you may get "upstream default-admin-80 does not have any active endpoints. Using default backend"

```yaml
apiVersion: v1
kind: Service
metadata:
  name: admin
spec:
  ports:
  - name: http-port
    port: 80
    protocol: TCP
    targetPort: http-port
  selector:
    app: admin
  sessionAffinity: ClientIP
  type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: admin
spec:
  replicas: 1
  template:
    metadata:
      labels: 
        app: admin
      name: admin
    spec:
      containers:
      - image: example/admin:latest
        name: admin
        ports:
        - containerPort: 80
          name: http-port
        resources:
          requests:
            cpu: 500m
            memory: 1000Mi
        volumeMounts:
        - mountPath: /etc/env-volume
          name: config
          readOnly: true
      imagePullSecrets:
      - name: cloud.docker.com-pull
      volumes:
      - name: config
        secret:
          defaultMode: 420
          items:
          - key: admin.sh
            mode: 256
            path: env.sh
          - key: settings.json
            mode: 256
            path: settings.json
          secretName: env-secret
```

Nginx Ingress Controller

  • note default-ssl-certificate at bottom
  • logging is great; see the --v flag below
  • note the Service will create an ELB on AWS which can be used to configure DNS.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-service
spec:
  ports:
  - name: http-port
    port: 80
    protocol: TCP
    targetPort: http-port
  - name: https-port
    port: 443
    protocol: TCP
    targetPort: https-port
  selector:
    # must match the pod labels of the ingress controller below
    k8s-app: nginx-ingress-lb
  sessionAffinity: None
  type: LoadBalancer
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-controller
  labels:
    k8s-app: nginx-ingress-lb
spec:
  replicas: 1
  selector:
    k8s-app: nginx-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-lb
        name: nginx-ingress-lb
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
        name: nginx-ingress-lb
        imagePullPolicy: Always
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 1
        # use downward API
        env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
        ports:
        - name: http-port
          containerPort: 80
          hostPort: 80
        - name: https-port
          containerPort: 443
          hostPort: 443
        # we expose 18080 to access nginx stats in url /nginx-status
        # this is optional
        - containerPort: 18080
          hostPort: 18080
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --default-ssl-certificate=default/tls-secret
        - --nginx-configmap=$(POD_NAMESPACE)/nginx-load-balancer-conf
        - --v=2
```

Default Backend (copied verbatim from a .yaml file)

```yaml
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  labels:
    k8s-app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    k8s-app: default-http-backend
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: default-http-backend
spec:
  replicas: 1
  selector:
    k8s-app: default-http-backend
  template:
    metadata:
      labels:
        k8s-app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: gcr.io/google_containers/defaultbackend:1.0
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
```

This config uses three secrets:

  • tls-secret - 3 files: tls.key, tls.crt, dhparam.pem
  • env-secret - 2 files: admin.sh and settings.json. The container has a start script that sets up its environment from these.
  • cloud.docker.com-pull - registry credentials, referenced as an imagePullSecret in the Deployment above
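
For completeness, a sketch of how tls-secret could be declared (the data values are placeholders for base64-encoded file contents; env-secret follows the same pattern, and cloud.docker.com-pull is a registry-credential secret):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: tls-secret
type: Opaque
data:
  # placeholders: each value is the base64-encoded content of the file
  tls.key: <base64 of tls.key>
  tls.crt: <base64 of tls.crt>
  dhparam.pem: <base64 of dhparam.pem>
```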