8
votes

I have a Kubernetes cluster that is connected over VPN to an on-premises datacenter. This cluster needs to "expose" Services to other programs running in the datacenter, but not to the Internet.

Currently I've been creating Services of type "NodePort" and then manually creating an internal (private) load balancer that maps an endpoint to the cluster node/port combination.

However, this approach has some drawbacks:

  • Having to manually add/remove Nodes from the load balancer (or run some process that scans the list of all nodes and makes sure they're attached to the ELB)
  • Having to make sure to delete the ELB when deleting a Service (the "orphan ELB" problem)
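For context, the manual workflow described above can be sketched with kubectl and the AWS CLI. The Service name some-service, the ELB name my-internal-elb, and the instance IDs are all hypothetical, and this assumes a Classic ELB:

```shell
# Look up the NodePort that Kubernetes assigned to the Service
# (jsonpath selects the first port entry)
kubectl get service some-service -o jsonpath='{.spec.ports[0].nodePort}'

# Manually register cluster nodes with the internal Classic ELB
# by EC2 instance ID -- this is the step that must be repeated
# whenever nodes are added or removed
aws elb register-instances-with-load-balancer \
  --load-balancer-name my-internal-elb \
  --instances i-0abc123 i-0def456
```

Keeping this registration in sync by hand (or via a polling script) is exactly the drawback described in the first bullet.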

Does anyone know of any way to configure Kubernetes to bring up "internal" load balancers in AWS instead of externally facing ones, and to manage them the same way it manages the external ones?


3 Answers

7
votes

Per this thread, apply the annotation service.beta.kubernetes.io/aws-load-balancer-internal to the Service definition:

kind: Service
apiVersion: v1
metadata:
  name: someService
  annotations:
    - name: service.beta.kubernetes.io/aws-load-balancer-internal
      value: 0.0.0.0/0
11
votes

The latest format is:

annotations:
  service.beta.kubernetes.io/aws-load-balancer-internal: "true"
10
votes

The above answer's syntax is invalid as of Kubernetes v1.5.2. The correct syntax is:

apiVersion: v1
kind: Service
metadata:
  name: some-service
  annotations:
    "service.beta.kubernetes.io/aws-load-balancer-internal": "0.0.0.0/0"
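Putting the pieces together, the annotation only has an effect on a Service of type LoadBalancer. A complete manifest might look like the following sketch, where the name internal-service, the app: my-app selector, and the port numbers are all illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: internal-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
spec:
  # type LoadBalancer is what triggers ELB creation;
  # the annotation makes that ELB internal instead of internet-facing
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```

After applying this, kubectl get service internal-service should show an internal ELB hostname in the EXTERNAL-IP column rather than a public one, and Kubernetes will register/deregister nodes and tear down the ELB with the Service, which addresses both drawbacks from the question.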