
I'm having trouble setting up my k8s pods exactly how I want. The problem is that I have multiple containers which all listen on the same ports (80, 443). On a remote machine I would normally handle this with docker-compose, using 'ports: - 12345:80' to map a different external port to each container's port 80. With k8s, it appears from all of the examples I have found that the only option for a container is to expose a port, not to proxy (remap) it. I know I can use a reverse proxy to forward to multiple ports, but that would require the images to use different ports, rather than all of them using the same port and having the container-level mapping forward the requests. Is there a way to do this in k8s? My current Service and Deployment manifests are below.
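For context, a minimal docker-compose sketch of the pattern described above (the service names, images, and host ports are illustrative assumptions, not taken from the original setup):

version: "3"
services:
  app-one:
    image: image-one:example   # application inside listens on port 80
    ports:
      - "5000:80"              # host port 5000 -> container port 80
  app-two:
    image: image-two:example   # also listens on port 80 inside its container
    ports:
      - "6000:80"              # host port 6000 -> container port 80

Both containers keep listening on port 80 internally; only the host-side port differs.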

apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  loadBalancerIP: xxx.xxx.xxx.xxx
  selector:
    app: app
    tier: backend
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 80
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  selector:
    matchLabels:
      app: app
      tier: backend
      track: stable
  replicas: 1
  template:
    metadata:
      labels:
        app: app
        tier: backend
        track: stable
    spec:
      containers:
      - name: app
        image: image:example
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: xxxxxxx

Ideally, I would be able to have the containers on a node listening on different ports, while the applications running in those containers continue to listen on 80/443, and my services would route to the correct container as necessary.

My load balancer is working correctly, as is my first container. Adding a second container succeeds, but the second container can't be reached. The second container is deployed from a similar manifest, with different names and a different image.

Unlike plain Docker, there are three endpoints in this setup: the load balancer endpoint, the cluster-internal service endpoint, and the container/pod endpoint. These don't have to agree; you could have external port 443 (with TLS termination), service port 80, and container port 8000. Services have their own cluster-internal IP addresses, so you can have multiple services that all listen on port 80. Does that help your setup? Which part isn't working? – David Maze
I've got a grip on the cluster service and load balancer endpoints. My issue is that my images both listen on port 80. My containers should listen on, for example, 5000 and 6000, and then forward the requests to their image's port 80. Exposing port 5000 with containerPort will just expose port 5000, which will then route to the image's port 5000, which the image isn't listening on. I'm happy to make edits for clarity wherever necessary. What I'm really looking for is to mimic docker-compose's ports: - 5000:80 functionality. – Carson
In your pod spec, do you want to set containerPort: 5000; and in the service, set port: 80, targetPort: 5000? (Which would be the equivalent of Compose ports: ['80:5000'], forwarding service port 80 to container port 5000.) I'm not sure which pieces you're referring to as "container" and "image" here. – David Maze
Use Kompose (github.com/kubernetes/kompose) to convert from docker-compose to k8s configs. :) – GintsGints
@DavidMaze The pieces I am referring to as container and image are exactly that: an image and a container. The images have API applications that both listen on port 80 for HTTP traffic. A docker-compose file would normally state that each container receives HTTP traffic on 5000/6000 and forwards it to port 80 inside the container. The containers in question here just happen to be in pods created by deployments. – Carson
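To make the mapping discussed in these comments concrete, here is a minimal sketch of one Service per backend, each forwarding its own cluster port to the shared container port 80 (the names app-one/app-two and the ports 5000/6000 are illustrative assumptions, not from the original manifests):

apiVersion: v1
kind: Service
metadata:
  name: app-one
spec:
  selector:
    app: app-one        # matches the labels on the first deployment's pods
  ports:
  - protocol: "TCP"
    port: 5000          # port the Service listens on inside the cluster
    targetPort: 80      # port the application listens on in the container
---
apiVersion: v1
kind: Service
metadata:
  name: app-two
spec:
  selector:
    app: app-two        # matches the labels on the second deployment's pods
  ports:
  - protocol: "TCP"
    port: 6000
    targetPort: 80

Each Service gets its own cluster IP, so both can forward traffic to port 80 of their respective pods without conflict.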

1 Answer


The answer here is to add a Service for the pod and declare the port mapping there. Using Kompose to convert a docker-compose file, this is the result:

apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: pathToKompose.exe convert
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: app
  name: app
spec:
  ports:
  - name: "5000"
    port: 5000
    targetPort: 80
  selector:
    io.kompose.service: app
status:
  loadBalancer: {}

as well as

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: pathToKompose.exe convert
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: app
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: app
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: pathToKompose.exe convert
        kompose.version: 1.21.0 (992df58d8)
      creationTimestamp: null
      labels:
        io.kompose.service: app
    spec:
      containers:
      - image: image:example
        imagePullPolicy: ""
        name: app
        ports:
        - containerPort: 80
        resources: {}
      restartPolicy: Always
      serviceAccountName: ""
      volumes: null
status: {}

Some of the fluff from Kompose could be removed (see the trimmed sketch below), but the relevant answer to this question is declaring port and targetPort for the pod in a Service, and exposing that targetPort as a containerPort in the Deployment's container. Thanks to David Maze and GintsGints for the help!
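A trimmed version of the same manifests, with the Kompose-generated annotations and empty fields removed (this is a sketch rather than the exact generated output, but the port mapping is unchanged):

apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: app
  name: app
spec:
  ports:
  - name: "5000"
    port: 5000          # port the Service exposes inside the cluster
    targetPort: 80      # port the application listens on in the container
  selector:
    io.kompose.service: app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: app
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: app
  template:
    metadata:
      labels:
        io.kompose.service: app
    spec:
      containers:
      - image: image:example
        name: app
        ports:
        - containerPort: 80   # matches targetPort in the Service
      restartPolicy: Always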