3 votes

I have many tenants running on one Kubernetes cluster (on AWS), where every tenant has one Pod that exposes one TCP port (not HTTP) and one UDP port.

  • I don't need load balancing capabilities.
  • The approach should expose an externally reachable IP address with a dedicated port for each tenant.
  • I don't want to expose the nodes directly to the internet.

I have the following service so far:

apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: my-app
spec:
  type: NodePort
  ports:
    - port: 8111
      targetPort: 8111
      protocol: UDP
      name: my-udp
    - port: 8222
      targetPort: 8222
      protocol: TCP
      name: my-tcp
  selector:
    app: my-app

What is the way to go?

You need to use Kubernetes Services. Here's a detailed answer I wrote for a similar question that might be useful to you - stackoverflow.com/a/50080291/1220089. I'm not the one who downvoted, but I suspect your question is a duplicate. – ffledgling
You can use Ingress (it is supported on GKE, but I don't know about AWS) and attach that to a reverse proxy. – Nitb

2 Answers

2 votes
  • Deploy an NGINX ingress controller on your AWS cluster.
  • Change the type of your my-service Service from NodePort to ClusterIP.
  • Edit the tcp-services ConfigMap in the ingress-nginx namespace, adding:
data:
  "8222": your-namespace/my-service:8222
  • Do the same for the udp-services ConfigMap:
data:
  "8111": your-namespace/my-service:8111

You can now access your application externally through the NGINX controller's IP: <ip>:8222 (TCP) and <ip>:8111 (UDP).
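Note that traffic only reaches the controller on those ports if the Service fronting ingress-nginx (typically a LoadBalancer on AWS) also exposes them alongside 80/443. A sketch, assuming the default ingress-nginx Service name and labels (adjust to match your deployment):

```yaml
# Excerpt of the ingress-nginx controller Service: the proxied
# TCP/UDP ports must be listed here in addition to 80/443.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  ports:
    - name: my-tcp
      port: 8222
      targetPort: 8222
      protocol: TCP
    - name: my-udp
      port: 8111
      targetPort: 8111
      protocol: UDP
  selector:
    app.kubernetes.io/name: ingress-nginx  # assumed label; check your install
```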

-1 votes

The description provided by @ffledgling is what you need.

But note that to expose ports externally, you still need either a load balancer or nodes that are reachable from the Internet. For example, you can expose a node to the Internet and allow access only to the necessary ports.