A `NodePort` service exposes your service on all `<NodeIP:NodePort>` addresses, i.e., you can reach your service on any combination of node IP and nodePort. Kubernetes can implement this natively because it has an agent (kube-proxy) running on all the nodes that creates the required configuration to forward the traffic received on `<NodeIP:NodePort>` to the backing pods.
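As a minimal sketch (the name `my-app` and the port numbers are made up for the example), a `NodePort` service only needs `type: NodePort`; if you omit `nodePort`, Kubernetes picks one from the node port range for you:

```yaml
# Hypothetical example: expose pods labelled app=my-app on every node.
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport
spec:
  type: NodePort
  selector:
    app: my-app          # pods backing the service (assumed label)
  ports:
    - port: 80           # cluster-internal service port
      targetPort: 8080   # container port on the pods
      # nodePort is auto-assigned from the node port range when omitted
```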
A few disadvantages of directly using a `NodePort` service are:

- **Nodes must be exposed externally:** unless your nodes themselves are exposed externally, you can't receive traffic on the `<NodeIP:NodePort>` address.
- **No node-level load balancing:** clients can connect to the service at `<NodeIP:NodePort>`, but there is no load balancing of traffic between the nodes unless the clients themselves do it. What if a node goes down? Who would inform the clients not to use that node?
- **Non-standard port numbers:** since the `NodePort` remains the same on all nodes, it must be a free port on all the nodes. Kubernetes makes sure of this by picking a free port from the node port range, which by default is 30000-32767. But I might want to expose the service on a more standard port like 80 or 443 (a sketch of pinning a specific nodePort follows this list).
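For illustration, you can pin the nodePort yourself, but it still has to fall inside the configured range, so standard ports like 80 or 443 are normally out of reach (all names and numbers below are placeholders):

```yaml
# Hypothetical example: pinning an explicit nodePort.
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport-pinned
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080   # must be free on every node and inside the node port range (30000-32767 by default)
```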
We can solve all of the above problems if we have an entity that receives the traffic on an externally reachable IP, on a port of our liking, and then forwards it to the NodePort service at `<NodeIP:NodePort>`. This entity can also load balance across the nodes by keeping track of which nodes are alive and distributing the incoming traffic among them. The cluster nodes can then remain in a private network, reachable only from this entity.
As this entity lives outside the Kubernetes cluster, Kubernetes itself cannot create or manage it. But someone with control at the topology/network level can, i.e., your cloud provider. They can create this entity to receive the traffic on an externally reachable IP and a port of your liking and forward it to your NodePort service. But there are lots of cloud providers. How do you tell a cloud provider, in a uniform way, that you need this functionality?
Here comes the `LoadBalancer` service. When you create a `LoadBalancer` service, Kubernetes creates a `NodePort` service and informs your cloud provider that you have created a service of type `LoadBalancer`. Your cloud provider then instantiates a load balancer to receive the traffic on an externally reachable IP and forward it to the cluster. It also updates the `status` section of your `LoadBalancer` service with the external IP, which you can then find by querying the Kubernetes API server (by running `kubectl get svc`, for example). Cloud providers usually charge for each `LoadBalancer` service that you create.
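As a sketch (names and ports are placeholders, matching the earlier example), changing the type is the only thing you declare on the Kubernetes side; provisioning the load balancer and wiring it to the underlying nodePort is the cloud provider's job:

```yaml
# Hypothetical example: same backing pods, exposed through a cloud load balancer.
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80          # port the external load balancer listens on
      targetPort: 8080  # container port on the pods
```

Once the cloud provider has done its part, `kubectl get svc my-app-lb` shows the allocated address in the `EXTERNAL-IP` column.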
What happens on a vanilla bare-metal (or self-provisioned) cluster if you create a `LoadBalancer` type service?
Kubernetes would still do its job of creating the `NodePort` service. But since there is no other entity, such as a cloud provider, watching for `LoadBalancer` services, there wouldn't be any practical difference between `LoadBalancer` and `NodePort` on such clusters. If you run `kubectl get svc`, the `EXTERNAL-IP` column for those `LoadBalancer` services simply shows `<pending>`.
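For comparison, the `status` stanza you would see with `kubectl get svc -o yaml` differs only in whether anyone has filled it in (the IP below is a made-up example address):

```yaml
# On a cloud provider: the load balancer's external IP gets recorded here.
status:
  loadBalancer:
    ingress:
      - ip: 203.0.113.10   # example value, written by the cloud provider
```

```yaml
# On vanilla bare metal: nothing populates the status, so kubectl keeps
# showing <pending> in the EXTERNAL-IP column.
status:
  loadBalancer: {}
```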