I have a simple ingress resource and two services: ess-index and ess-query. The services have been exposed with type NodePort and --session-affinity=None. The ingress resource has the following structure:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ess-ingress
spec:
  backend:
    serviceName: ess-query
    servicePort: 2280
  rules:
  - http:
      paths:
      - path: /api/index
        backend:
          serviceName: ess-index
          servicePort: 2280
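For reference, here is a minimal sketch of how one of the backing services is exposed (the selector label is an assumption on my part; the service name and port match the ingress above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ess-index
spec:
  type: NodePort          # master allocates a node port from the configured range
  sessionAffinity: None   # corresponds to --session-affinity=None
  selector:
    app: ess-index        # assumed pod label
  ports:
  - port: 2280
    targetPort: 2280
```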
The created services use proxy-mode iptables. When I expose these services as NodePort, the Kubernetes master allocates a port from a flag-configured range, and each node proxies that port into the ess-index or ess-query service respectively.
So, when I POST the ingress with kubectl create -f ingress.yaml, it causes the following behaviour: a GLBC controller is automatically created that manages the following GCE resource graph (Global Forwarding Rule -> TargetHttpProxy -> URL Map -> Backend Service -> Instance Group). According to the documentation it should appear as a pod, but I can't see it in the output of: kubectl get pods --namespace=kube-system
. Here's the sample output. My question is: what is the default load balancing algorithm for this load balancer? What happens when traffic is routed to the appropriate backend? Is my understanding correct that the default algorithm is not round robin and that, according to the Service
docs, traffic is distributed randomly (perhaps based on some hash of source/destination IP, etc.)? This matters because in my case all traffic comes from a small number of machines with fixed IPs, and I can see non-uniform traffic distribution across my backend instances. If so, what is the proper way to get round-robin behaviour? As far as I understand, I can choose between two options:
- A custom ingress controller. Pros: it automatically detects pod restarts, etc.; cons: it can't support the advanced L7 features that I may need in the future (like session persistence).
- Delete the ingress and use a build-it-yourself solution like the one described here: https://www.nginx.com/blog/load-balancing-kubernetes-services-nginx-plus/ Pros: fully customisable; cons: you have to take care of pod restarts, etc. yourself.
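To illustrate the second option, a rough sketch of the kind of NGINX configuration that approach ends up with (the upstream addresses are placeholders, not my actual pod IPs). NGINX distributes requests round-robin across upstream servers by default, which is the behaviour I'm after:

```nginx
upstream ess-index {
    server 10.244.1.5:2280;  # pod IP, placeholder
    server 10.244.2.7:2280;  # pod IP, placeholder
}

server {
    listen 80;
    location /api/index {
        proxy_pass http://ess-index;
    }
}
```

The maintenance cost is keeping that upstream list in sync with pod restarts and rescheduling, which the ingress controller would otherwise do for me.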