2 votes

I have a simple ingress resource and two services: ess-index and ess-query. The services have been exposed with type NodePort and --session-affinity=None. The ingress resource has the following structure:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ess-ingress
spec:
  backend:
    serviceName: ess-query
    servicePort: 2280
  rules:
  - http:
      paths:
      - path: /api/index
        backend:
          serviceName: ess-index
          servicePort: 2280

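For reference, the services behind it would look roughly like this. This is a minimal sketch: only the name, the NodePort type, port 2280, and sessionAffinity: None come from the description above; the selector label and targetPort are assumptions.

apiVersion: v1
kind: Service
metadata:
  name: ess-index
spec:
  type: NodePort            # the master allocates a node port from a flag-configured range
  sessionAffinity: None     # no client-IP stickiness
  selector:
    app: ess-index          # assumed pod label
  ports:
  - port: 2280              # the service port referenced by the ingress
    targetPort: 2280        # assumed container port
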
The created services will use proxy-mode iptables. When I expose these services as a NodePort, the Kubernetes master allocates a port from a flag-configured range, and each node proxies that port into the ess-index or ess-query service respectively.

When I POST the ingress with kubectl create -f ingress.yaml, a GLBC controller is created automatically. It manages the following GCE resource graph: Global Forwarding Rule -> TargetHttpProxy -> Url Map -> Backend Service -> Instance Group (see the gcloud sketch after the list below for one way to inspect these). According to the documentation it should appear as a pod, but I can't see it in the output of kubectl get pods --namespace=kube-system. Here's the sample output.

My question is: what is the default load balancing algorithm for this load balancer? What happens when traffic is routed to the appropriate backend? Is my understanding correct that the default algorithm is not round robin and that, according to the Service docs, traffic is distributed randomly (perhaps based on some hash of source/destination IP, etc.)? This matters because in my case all traffic comes from a small number of machines with fixed IPs, and I can see a nonuniform traffic distribution across my backend instances. If so, what is the proper way to get round robin behaviour? As far as I understand, I can choose between two variants:

  1. A custom ingress controller. Pros: it can automatically detect pod restarts, etc. Cons: it can't support advanced L7 features that I may need in the future (like session persistence).
  2. Delete the ingress and use a build-it-yourself solution like the one described here: https://www.nginx.com/blog/load-balancing-kubernetes-services-nginx-plus/ Pros: fully customisable. Cons: you have to take care of pod restarts, etc. yourself.
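To see the resource graph the GLBC builds, the standard gcloud listing commands can be used (a sketch; the actual resource names are generated by the controller, so you have to pick out the k8s-prefixed entries):

# One gcloud list call per layer of the graph described above
gcloud compute forwarding-rules list       # Global Forwarding Rule
gcloud compute target-http-proxies list    # TargetHttpProxy
gcloud compute url-maps list               # Url Map
gcloud compute backend-services list       # Backend Service
gcloud compute instance-groups list        # Instance Group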

1 Answer

2 votes

Incorporating kube-proxy and cloud load balancer algorithms so they cooperate toward a common goal is still a work in progress. Right now it will end up spraying; over time you get a roughly equal distribution, but it will not be strictly round robin.
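The spraying falls out of how kube-proxy in iptables mode programs its rules: each packet entering a service chain is matched against the statistic module in random mode, so endpoint choice is probabilistic rather than a rotating counter. Roughly, for a hypothetical two-endpoint service (the chain names below are made up; real ones are generated hashes):

# First rule claims ~50% of packets at random; the rest fall through to the second endpoint
-A KUBE-SVC-ESSQUERY -m statistic --mode random --probability 0.5 -j KUBE-SEP-ENDPOINT1
-A KUBE-SVC-ESSQUERY -j KUBE-SEP-ENDPOINT2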

If you really want fine-grained control over the algorithm, you can deploy the nginx ingress controller and expose it as a Service of Type=LoadBalancer (or even stick a GCE L7 in front of it). This will give you Ingress semantics, but allows an escape hatch for areas that cloud providers aren't fully integrated with Kube just yet, like algorithm control. The escape hatch is exposed as annotations or a full config map for the template, as sketched below.
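A minimal sketch of that setup, assuming the controller pods carry a label like app: nginx-ingress (the label, the names, and the load-balance ConfigMap key are assumptions here; check the controller's docs for the keys it actually supports):

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: LoadBalancer          # cloud provider provisions an external LB in front of the controller
  selector:
    app: nginx-ingress        # assumed label on the nginx ingress controller pods
  ports:
  - name: http
    port: 80
    targetPort: 80
---
# The escape hatch: a ConfigMap the controller reads into its nginx template
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-config
data:
  load-balance: round_robin   # assumed key; switches the upstream balancing algorithm

Point the controller at the ConfigMap (e.g. via its --configmap flag) and the rendered nginx config picks up the algorithm without giving up the Ingress API.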