0 votes

Our company has an external service that needs to connect to an application running in one of our Kubernetes clusters. The external service must connect via an IP address (it cannot use a host name) with a static port (port 80). This is not ideal, but we cannot control the external service.

The only externally exposed service in our cluster runs HAProxy, which routes traffic to all of the internal (not publicly exposed) pods/services in the cluster. HAProxy's service is deployed as type LoadBalancer and sits behind an AWS ELB for SSL termination.

The problem comes when I try to deploy HAProxy with a static port. According to the documentation, I should be able to specify "nodePort" in the ports section for Services of type NodePort or LoadBalancer:

nodePort:
The port on each node on which this service is exposed when type=NodePort or 
LoadBalancer. Usually assigned by the system. If specified, it will be 
allocated to the service if unused or else creation of the service will fail. 
Default is to auto-allocate a port if the ServiceType of this Service 
requires one.

Note that this Service will be visible as both <NodeIP>:spec.ports[*].nodePort and .spec.clusterIP:spec.ports[*].port. (If the --nodeport-addresses flag in kube-proxy is set, would be filtered NodeIP(s).)

So of course, I tried that with the following k8s configuration file:

apiVersion: v1
kind: Service
metadata:
    name: haproxy
    labels:
        app: haproxy
        env: {{ .Values.env | lower }}
    annotations:
        dns.alpha.kubernetes.io/external: "{{ .Values.externalUrl | lower }}"
        service.beta.kubernetes.io/aws-load-balancer-ssl-cert: {{ .Values.sslcert | lower }}
        service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
        service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "500"
        service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "Environment={{ .Values.env | lower }},Type=k8s"
spec:
    type: LoadBalancer
    ports:
        - name: https
          port: 443
          targetPort: 80
          nodePort: 80
    selector:
        app: haproxy
        env: {{ .Values.env | lower }}

This produces the following error when deployed with Helm:

Error: UPGRADE FAILED: Service "haproxy" is invalid: spec.ports[0].nodePort: Invalid value: 80:

I simply need to be able to hit the haproxy service by entering the node's IP into my browser, but maybe I'm misplacing the nodePort configuration key. Should it go somewhere else in the configuration file? I've tried moving it to various places under the "ports" section, but that just throws parsing errors.

Thanks in advance! I'm at a complete loss.


1 Answer

2 votes

I believe that if you simply don't configure the nodePort, it will do what you want.

The port and targetPort are the important parts: in your case they specify that port 443 (whether on the ELB or on the Kubernetes Service endpoint haproxy.default.svc.cluster.local) gets forwarded to port 80 in the pod. In an AWS environment like the one you describe, the nodePort is mostly a side effect: you'd use ClusterIP Services to communicate between Pods, LoadBalancer Services for things you want to expose outside the immediate cluster, and you generally wouldn't make direct connections to the nodes themselves.
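
As a minimal sketch (reusing the names and Helm values from your manifest), the spec section with the nodePort line simply removed would look like this; Kubernetes then auto-allocates a node port behind the scenes, and the ELB still listens on 443 and forwards to HAProxy on port 80:

spec:
    type: LoadBalancer
    ports:
        - name: https
          port: 443        # port the ELB / Service listens on
          targetPort: 80   # port HAProxy listens on inside the pod
          # no nodePort: Kubernetes auto-allocates one in its valid range
    selector:
        app: haproxy
        env: {{ .Values.env | lower }}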

There's a more specific limitation as well: the nodePort must be between 30000 and 32767, so you can't use it to publish arbitrary ports from the nodes, and that's why you're getting the error you see.
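
Purely to illustrate that limitation: if you really did want to pin the node port, a value inside the allowed range would pass validation (30080 here is an arbitrary choice, not something your setup requires), though it still wouldn't give you port 80 on the node itself:

    ports:
        - name: https
          port: 443
          targetPort: 80
          nodePort: 30080   # arbitrary example value; must be within 30000-32767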