1
votes

I have just started with Kubernetes and have some queries about Kubernetes load balancing approaches; I have been unable to find clear answers in the Kubernetes documentation.

First, let's say we have created a deployment "iis" and scaled it to 3 replicas. Now, without creating a service, how can we access these endpoints?
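(For context, I created and scaled the deployment with something roughly like the commands below; the image name is just a placeholder for whichever IIS image is used.)

# create the deployment and scale it to 3 replicas
kubectl create deployment iis --image=<your-iis-image>
kubectl scale deployment iis --replicas=3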

Now, we have created a service for this deployment (having 3 replicas) of type ClusterIP, so it is only exposed within the cluster. How does the service load-balance the traffic coming to this service inside the cluster? Does it use round robin or random selection of endpoints? According to the Kubernetes documentation, there are two service proxy modes, userspace and iptables. How can I know which one my service is using?
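(The service was created with something like this; port 80 is just an example, and the type defaults to ClusterIP so it is only reachable inside the cluster:)

kubectl expose deployment iis --port=80 --target-port=80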

Next, we exposed the service publicly using a LoadBalancer. This creates a load balancer at the cloud provider and uses it. My question is: how does this external load balancer balance the traffic to the pods? Does it balance traffic to the service, which then redirects it to the endpoints, or does it balance traffic directly to the endpoints (pods)? Also, in this LoadBalancer case, how is the internal traffic (coming from inside the cluster) to this service load balanced?
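(Roughly like this, as far as I understand it; the service name here is arbitrary, and you could equally change the existing service's type to LoadBalancer:)

kubectl expose deployment iis --name=iis-public --port=80 --type=LoadBalancer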

Kindly try to give detailed answers.


1 Answer

2
votes

First, let's say we have created a deployment "iis" and scaled it to 3 replicas. Now, without creating a service, how can we access these endpoints?

Unless you have an out-of-band solution for this (like an external load balancer in which you register the pod IPs yourself), you can't. Services are there to make connections between pods easy. Use them!
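If you really do want to go the out-of-band route, the pod IPs are visible with kubectl; a minimal sketch, assuming the deployment was created with kubectl create deployment so its pods carry the label app=iis:

# show the pod IPs behind the "iis" deployment; these are what you would have to register externally yourself
kubectl get pods -l app=iis -o wide

# or just the IPs
kubectl get pods -l app=iis -o jsonpath='{.items[*].status.podIP}'

Bear in mind that pod IPs change every time a pod is rescheduled, which is exactly the churn a service hides from you.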

How does the service load-balance the traffic coming to this service inside the cluster?

In order to understand this, it's worth knowing how services work in Kubernetes.

Services are handled by kube-proxy. Kube-proxy (by default now) creates iptables rules which look a little bit like this:

-A KUBE-SERVICES ! -s 192.168.0.0/16 -d <svc-ip>/32 -p tcp -m comment --comment "namespace/service-name: cluster IP" -m tcp --dport svc-port -j KUBE-MARK-MASQ

What happens is that iptables matches the packets destined for the service IP and redirects them to one of the pod IPs backing that service.
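You can inspect these rules yourself on any node where kube-proxy runs; for example, assuming a service named "iis" in the default namespace:

# dump the NAT table that kube-proxy programs and filter on the service's comment
sudo iptables-save -t nat | grep "default/iis"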

If you look further through the iptables rules and search for "probability", you'll see something like this:

-A KUBE-SVC-AGR3D4D4FQNH4O33 -m comment --comment "default/redis-slave:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-25QOXGEMBWAVOAG5
-A KUBE-SVC-GYQQTB6TY565JPRW -m comment --comment "default/frontend:" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-JZXZFA7HRDMGR3BA
-A KUBE-SVC-GYQQTB6TY565JPRW -m comment --comment "default/frontend:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-ZW5YSZGA33MN3O6G

So the answer is: it's random, with some probability weighting. A more thorough explanation of how the probability is weighted can be seen in this GitHub comment.
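To make the weighting concrete, here is a hypothetical service with 3 endpoints (the chain and endpoint names below are made up). The rules are tried top to bottom and the first match wins, so the effective split is 1/3, then 2/3 × 1/2 = 1/3, and the remaining 1/3 falls through to the final, unconditional rule:

-A KUBE-SVC-EXAMPLE -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-AAAAAAAAAAAAAAAA
-A KUBE-SVC-EXAMPLE -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-BBBBBBBBBBBBBBBB
-A KUBE-SVC-EXAMPLE -j KUBE-SEP-CCCCCCCCCCCCCCCC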

According to the Kubernetes documentation, there are two service proxy modes, userspace and iptables. How can I know which one my service is using?

Again, this is determined by kube-proxy, and is decided when kube-proxy starts up. It's a command line flag on the kube-proxy process. By default, it'll use iptables, and it's highly recommended you stick with that unless you know what you're doing.
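A couple of ways to check, assuming a fairly standard (e.g. kubeadm-style) cluster where kube-proxy runs in kube-system; the object names and labels may differ on your distribution:

# kubeadm clusters keep the mode in the kube-proxy ConfigMap (an empty value means the default, iptables)
kubectl -n kube-system get configmap kube-proxy -o yaml | grep "mode:"

# kube-proxy also logs which proxier it picked at startup
kubectl -n kube-system logs -l k8s-app=kube-proxy | grep -i proxier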

My question is: how does this external load balancer balance the traffic to the pods?

This is entirely dependent on your cloud provider and the load balancer you've chosen. What the LoadBalancer service type does is expose the service on a NodePort and then map an external port on the load balancer back to that. In other words, the external load balancer spreads traffic across the node IPs on that NodePort, and kube-proxy on the node then forwards it to a pod using the same rules described above. All the LoadBalancer type does differently is register the node IPs serving the service in the external provider's load balancer, e.g. an ELB, rather than only in an internal ClusterIP service. I would recommend reading the docs for your cloud provider to determine the details.
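You can see both halves of that mapping on the service object itself; for example (replace the service name with yours, and expect the output to vary by provider):

# EXTERNAL-IP is the cloud load balancer; in PORT(S), the high port after the colon is the NodePort it forwards to
kubectl get service <your-loadbalancer-service> -o wide

# or pull out just the allocated NodePort
kubectl get service <your-loadbalancer-service> -o jsonpath='{.spec.ports[0].nodePort}'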

Also, in this LoadBalancer case, how is the internal traffic (coming from inside the cluster) to this service load balanced?

Traffic from inside the cluster that hits the service's ClusterIP is balanced by the same kube-proxy iptables rules described above; for anything that actually goes out through the external load balancer, again, see the docs for your cloud provider.