2
votes

In Kubernetes, I have the following Service:

apiVersion: v1
kind: Service
metadata:
  name: test-service
  namespace: default
spec:
  ports:
    - name: tcp
      protocol: TCP
      port: 5555
      targetPort: 5555    
    - name: udp
      protocol: UDP
      port: 5556
      targetPort: 5556
  selector:
    tt: test

This exposes two ports: 5555 for TCP and 5556 for UDP. How can I expose these ports externally using the same ingress? I tried something like the following with ingress-nginx, but it doesn't work; it complains that mixed protocols are not supported.

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "5555": "default/test-service:5555"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  "5556": "default/test-service:5556"
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: LoadBalancer
  ports:
    - name: tcp
      port: 5555
      targetPort: 5555
      protocol: TCP
    - name: udp
      port: 5556
      targetPort: 5556
      protocol: UDP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

Is there a way to do this?

Support for mixed TCP/UDP protocols depends on the cloud provider. Which cloud provider are you using? – Malgorzata
@Malgorzata DigitalOcean – Mohamed

1 Answer

0
votes

Most cloud providers do not support UDP load balancing or mixed protocols in a single load balancer, though some have cloud-specific methods to work around this limitation.

The DigitalOcean CPI does not support mixed protocols in the Service definition; it accepts TCP only for load-balancer Services. It is, however, possible to request HTTP(S) and HTTP2 ports with Service annotations.
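
For reference, a minimal sketch of what that can look like, assuming the service.beta.kubernetes.io/do-loadbalancer-protocol annotation from digitalocean-cloud-controller-manager (the Service name here is made up, and the https/http2 values typically also require certificate-related annotations):

apiVersion: v1
kind: Service
metadata:
  name: http-lb              # hypothetical name, for illustration only
  namespace: default
  annotations:
    # Assumed annotation from digitalocean-cloud-controller-manager;
    # accepted values are tcp, http, https and http2 - UDP is not an option.
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
spec:
  type: LoadBalancer
  ports:
    - name: http
      protocol: TCP          # the Service protocol itself must still be TCP
      port: 80
      targetPort: 8080
  selector:
    tt: test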

Summary: the DigitalOcean CPI, with its TCP-only limitation, is the current bottleneck. As long as that limitation is in place, mixed-protocol support in Kubernetes itself will have no effect for DigitalOcean load balancers.

See more: do-mixed-protocols.

A simple solution that may solve your problem is to set up a reverse proxy on a standalone server using Nginx, and route the TCP and UDP traffic from there to your Kubernetes nodes.

Follow these two steps:

1. Create a NodePort Service for your application

2. Create a small server instance and run Nginx with a load-balancing config on it

Use the NodePort Service type, which exposes your application on your cluster nodes and makes it reachable through a node IP on a static port. This type supports multi-protocol Services. Read more about Services here.

apiVersion: v1
kind: Service
metadata:
  name: test-service
  namespace: default
spec:
  type: NodePort
  ports:
  - name: tcp
    protocol: TCP
    port: 5555
    targetPort: 5555
    nodePort: 30010    
  - name: udp
    protocol: UDP
    port: 5556
    targetPort: 5556
    nodePort: 30011
  selector:
    tt: test

For example, this Service exposes the test pods' port 5555 through nodeIP:30010 over TCP, and port 5556 through nodeIP:30011 over UDP. Please adjust the ports according to your needs; this is just an example.
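
After applying the manifest (the file name below is assumed), you can confirm the assigned node ports; the IPs in the sample output are illustrative:

$ kubectl apply -f test-service.yaml
$ kubectl get svc test-service -n default
NAME           TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)                         AGE
test-service   NodePort   10.96.12.34   <none>        5555:30010/TCP,5556:30011/UDP   1m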

Then create a small server instance and run Nginx with a load-balancing config on it.

For this step, you can get a small server from any cloud provider. Once you have the server, ssh into it and run the following to install Nginx (this example assumes a yum-based distribution):

$ sudo yum install nginx
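
The stream block used further below requires Nginx built with the stream module. On some distributions it ships as a separate package (often named nginx-mod-stream, but check your repositories); a quick way to verify your build includes it:

$ nginx -V 2>&1 | grep -o with-stream | head -1
with-stream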

In the next step, you will need your node IP addresses, which you can get by running:

$ kubectl get nodes -o wide

Note: if you have a private cluster without external access to your nodes, you will have to set up a point of entry for this (for example, a NAT gateway).
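
To pull out just the node IP addresses for the upstream blocks below, a jsonpath query along these lines can help (use ExternalIP instead of InternalIP if your nodes have public addresses):

$ kubectl get nodes -o jsonpath='{range .items[*]}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'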

Then add the following to your nginx.conf (for example, run $ sudo vi /etc/nginx/nginx.conf):

worker_processes 1;
events {
    worker_connections 1024;
}
stream {
  upstream tcp_backend {
    server <node ip 1>:30010;
    server <node ip 2>:30010;
    server <node ip 3>:30010;
    # ... one entry per node
  }
  upstream udp_backend {
    server <node ip 1>:30011;
    server <node ip 2>:30011;
    server <node ip 3>:30011;
    # ... one entry per node
  }
  server {
    listen 5555;              # TCP front-end port
    proxy_pass tcp_backend;
    proxy_timeout 1s;
  }
  server {
    listen 5556 udp;          # UDP front-end port
    proxy_pass udp_backend;
    proxy_timeout 1s;
  }
}
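
Before starting Nginx, it is worth validating the file; nginx -t only checks the configuration and does not touch a running instance:

$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful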

Now you can start your Nginx server using the command:

$ sudo /etc/init.d/nginx start

If you had already started your Nginx server before applying the changes to your config file, you have to restart it; execute the commands below:

$ sudo netstat -tulpn    # find the PID of the running nginx process
$ sudo kill -2 <nginx PID>
$ sudo service nginx restart
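
Alternatively, Nginx can re-read its configuration in place, which avoids killing the process by hand (on systemd-based servers, sudo systemctl reload nginx does the same):

$ sudo nginx -s reload    # reloads nginx.conf without dropping established connections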

And now you have a TCP/UDP load balancer which you can reach at <server IP>:5555 for TCP and <server IP>:5556 for UDP; Nginx forwards that traffic to the node ports on your cluster.
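
As a quick smoke test from any machine that can reach the proxy server (assuming netcat is installed and your application answers on those ports):

$ nc <server IP> 5555       # TCP path through the proxy
$ nc -u <server IP> 5556    # UDP path through the proxy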

See more: tcp-udp-loadbalancer.