
I'm trying to wrap my head around exposing internal load balancing to the outside world on a bare metal k8s cluster.

Let's say we have a basic cluster:

  1. Some master nodes and some worker nodes, each of which has two interfaces: one public-facing (eth0) and one local (eth1) with an IP within the 192.168.0.0/16 network

  2. MetalLB deployed and configured with the 192.168.200.200-192.168.200.254 range for its internal IPs

  3. An Ingress controller with its Service of type LoadBalancer

As far as I understand, MetalLB should now assign one of the IPs from 192.168.200.200-192.168.200.254 to the ingress service.
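For example, I'd expect the assigned address to show up in the EXTERNAL-IP column of something like the following (assuming the controller lives in an ingress-nginx namespace; the name may differ in my setup):

kubectl get svc -n ingress-nginx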

But I have the following questions:

Can I curl the ingress controller's external IP from every node (as long as it is reachable on eth1), with a Host header attached, and get a response from the service that's configured in the corresponding Ingress resource, or does that work only on the node where the Ingress pods are currently placed?
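To make it concrete, I mean something like the following, where the hostname and the IP are just placeholders:

curl -H "Host: myapp.example.com" http://192.168.200.200/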

What are my options for passing external traffic arriving on eth0 to an ingress listening on the eth1 network?

Is it possible to forward requests while preserving the source IP address, or is attaching an X-Forwarded-For header the only option?


1 Answer


Assuming that we are talking about MetalLB in layer 2 mode.

Addressing the following questions:

Can I curl the ingress controller's external IP from every node (as long as it is reachable on eth1), with a Host header attached, and get a response from the service that's configured in the corresponding Ingress resource, or does that work only on the node where the Ingress pods are currently placed?

Is it possible to forward requests while preserving the source IP address, or is attaching an X-Forwarded-For header the only option?

Depending on whether you need to preserve the source IP address, this can go both ways:


Preserve the source IP address

To do that, you would need to set the Service of type LoadBalancer of your Ingress controller to use the "Local" traffic policy by setting (in your YAML manifest):

  • .spec.externalTrafficPolicy: Local

This setup will be valid as long as there is a replica of your Ingress controller on each node, since all of the networking coming to your controller will be contained within a single node.

Citing the official docs:

With the Local traffic policy, kube-proxy on the node that received the traffic sends it only to the service’s pod(s) that are on the same node. There is no “horizontal” traffic flow between nodes.

Because kube-proxy doesn’t need to send traffic between cluster nodes, your pods can see the real source IP address of incoming connections.

The downside of this policy is that incoming traffic only goes to some pods in the service. Pods that aren’t on the current leader node receive no traffic, they are just there as replicas in case a failover is needed.

Metallb.universe.tf: Usage: Local traffic policy
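As a minimal sketch of a Service using this policy (the name, namespace, selector and ports are placeholders; adjust them to your Ingress controller):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller    # placeholder name
  namespace: ingress-nginx          # placeholder namespace
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local      # preserve the client source IP
  selector:
    app.kubernetes.io/name: ingress-nginx    # placeholder label selector
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443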


Do not preserve the source IP address

If your use case does not require you to preserve the source IP address, you could go with:

  • .spec.externalTrafficPolicy: Cluster

This setup won't require a replica of your Ingress controller to be present on each node.

Citing the official docs:

With the default Cluster traffic policy, kube-proxy on the node that received the traffic does load-balancing, and distributes the traffic to all the pods in your service.

This policy results in uniform traffic distribution across all pods in the service. However, kube-proxy will obscure the source IP address of the connection when it does load-balancing, so your pod logs will show that external traffic appears to be coming from the service’s leader node.

Metallb.universe.tf: Usage: Cluster traffic policy
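For completeness, if the Service already exists you can switch between the two policies with a patch like the following (the name and namespace are placeholders):

kubectl patch svc ingress-nginx-controller -n ingress-nginx \
  -p '{"spec":{"externalTrafficPolicy":"Cluster"}}'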


Addressing the 2nd question:

What are my options for passing external traffic arriving on eth0 to an ingress listening on the eth1 network?

MetalLB listens on all interfaces by default; all you need to do is specify an address pool from that interface's network within the MetalLB config.

You can find more reference on this topic in the MetalLB documentation.

An example of such a configuration could be the following:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools: # HERE
    - name: my-ip-space
      protocol: layer2
      addresses:
      - 192.168.1.240/28
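If you define more than one address pool (for example, a separate pool with addresses from the eth0 network), you should be able to request an address from a specific pool with the metallb.universe.tf/address-pool annotation on the Service; a sketch with placeholder names:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller    # placeholder name
  namespace: ingress-nginx          # placeholder namespace
  annotations:
    metallb.universe.tf/address-pool: my-ip-space    # pool name from the ConfigMap above
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx    # placeholder label selector
  ports:
  - port: 80
    targetPort: 80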