4
votes

I am running an application on Kubernetes that was provided to me as a black-box Docker image. It runs with a bunch of env vars, volume mounts and (a little more unconventionally) a host port. I discovered - with a lot of pain and sweat - that, as expected, I can't have more than one pod in my deployment if I ever want the host port to keep working.

Two things are clear to me: 1. I need to add more pod replicas, and 2. I can't use an ingress controller (I need to keep a separate external IP).

Other points of information are:

  • I am using an external IP (the quick solution is a LoadBalancer service)
  • When I enable host port on Kubernetes, everything works like a charm
  • I am using a single TLS certificate stored in a PVC that will be shared between my pods.
  • When I disable host port, increase the number of replicas and pretend it should work, the pods start running successfully, but the application can't be reached the way I normally reach it, as if it never hears what comes from the user through the load balancer (hence I thought setting up a NAT might have something to do with a solution?)

Things I tried:

  • Use NodePort to expose the containerPort, and add replicas (and maybe then set up an ingress for load balancing). Problems with this: the port I am trying to map to the host is 80, and it's out of the NodePort range. I also need to allow both TCP and UDP through, which would require creating two separate services, each with a different nodePort (see the sketch after this list).
  • Expose every possible port I can think of that might be used through a LoadBalancer service. The problem with this is that the user cannot reach the app for some reason.
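
A minimal sketch of what that TCP/UDP split into two NodePort services might look like (the service names and the 30080/30081 values are illustrative assumptions, not my actual manifests):

nodeport-services.yaml (hypothetical)

apiVersion: v1
kind: Service
metadata:
  namespace: x
  name: x-tcp
spec:
  type: NodePort
  selector:
    app: x
  ports:
  - name: web-tcp
    port: 80
    targetPort: 80
    nodePort: 30080   # nodePort values must fall in the 30000-32767 range
    protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  namespace: x
  name: x-udp
spec:
  type: NodePort
  selector:
    app: x
  ports:
  - name: web-udp
    port: 80
    targetPort: 80
    nodePort: 30081   # a second, different nodePort for the UDP service
    protocol: UDP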

My yaml files look something like the following:

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: x
  name: x
  labels:
    app: x
spec:
  replicas: 1
  selector:
    matchLabels:
      app: x
  template:
    metadata:
      labels:
        app: x
    spec:
      # hostNetwork: true
      containers:
      - name: x
        image: x
        env:
        ...
        volumeMounts:
        ...
        ports:
        - containerPort: 80
      volumes:
      ...
      imagePullSecrets:
      - name: x

service.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    app: x
  namespace: x
  name: x
spec:
  type: LoadBalancer
  loadBalancerIP: x
  ports:
  - name: out
    port: 8081
    targetPort: 8081
    protocol: TCP
  - name: node
    port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: x
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: x
  namespace: x
  name: x
spec:
  type: LoadBalancer
  loadBalancerIP: x
  ports:
  - name: out
    port: 8081
    targetPort: 8081
    protocol: UDP
  - name: node
    port: 80
    targetPort: 80
    protocol: UDP
  selector:
    app: x

Problem is, what is the best practice / solution to replace host port networking safely?


1 Answer

3
votes

After a bit of sweat and tears I figured this out. I found two alternatives to using host networking, both of which give us more freedom to use the host ports in other pods.

1. Map containerPort to hostPort

This method is slightly better than host networking, because it only claims very specific ports on the host.

Advantages: multiple pods can now use host ports, as long as they use different host ports. Another advantage is that you can use host ports in pretty much any range, e.g. ports below 1000.

Disadvantages: multiple pods in a single Deployment or StatefulSet still cannot co-exist with this configuration, as they would all try to claim the same host port. So the "node port not available" error will persist.

deployment.yaml

   ...
    ports:
    - containerPort: 9000
      hostPort: 9000
   ...
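
For context, the fragment above sits under the container's ports list, roughly like this (values illustrative, not my exact manifest):

    spec:
      containers:
      - name: x
        image: x
        ports:
        - containerPort: 9000
          hostPort: 9000   # claims port 9000 on the node, so only one such pod fits per node
          protocol: TCP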

2. Use nodePort in your service, map to containerPort

This was what essentially did it for me. NodePorts allowed in service configurations range from 30000 to 32767, so there was no way for me to use 8081 and 443 as nodePorts directly. Instead I mapped my 443 containerPort to node port 30443 in my LoadBalancer service, and my 8081 containerPort to node port 30881. I then made a few changes in my code (passing these new node ports in as env vars) for the places where my application needs to know which host port is being used.

Advantages: you can scale up your deployment as much as you like! You also do not occupy the well-known ports in case they are needed later.

Disadvantages: the range (30000 - 32767) is limited. Also, no two services can share these nodePorts, so you will only be able to use either the TCP or the UDP service. You will also have to make some changes in your app so it works with higher-numbered ports.

service.yaml

  ...
  - name: out
    port: 30881
    targetPort: 8081
    nodePort: 30881
    protocol: TCP
  - name: https
    port: 30443
    targetPort: 443
    nodePort: 30443
    protocol: TCP
  ...
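
Since my application also needs to know which external ports are in use, I pass the node ports back into the pods as env vars, roughly like this in the deployment (the variable names here are just examples; your app will expect whatever names it actually reads):

deployment.yaml

   ...
        env:
        - name: EXTERNAL_TCP_PORT     # example name only
          value: "30881"
        - name: EXTERNAL_HTTPS_PORT   # example name only
          value: "30443"
   ...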

So basically, whichever resource claims the host port is the one you can only have one of. If you go with the pod hostPort, you can only have one pod using that port per node, and if you use the service nodePort, you can only have one service using that port on your node.