1 vote

My natural thought is that if nginx were just a daemon process on the k8s node, rather than a pod (container) in the k8s cluster, it looks like it could still fulfill the ingress controller's job. Because it runs on the k8s node, the process could still talk to the apiserver to fetch each service's backend pod information, such as IP addresses, so it could still be used as an HTTP proxy server to direct traffic to the different services.

So, two questions:

  1. Why does the nginx ingress controller have to be a pod?
  2. Why does the nginx ingress controller get only 1 replica, and on which node does it run? If the nginx controller pod dies, things will become unstable.

Thanks!


2 Answers

2 votes

Why does the nginx ingress controller have to be a pod?

It is possible to run the Nginx controller as a DaemonSet in Kubernetes; however, I am not so sure about running it directly on the node as a plain process.

Managing the pod with a Kubernetes DaemonSet or Deployment is easy compared to managing a process on the node.
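For illustration, a minimal DaemonSet sketch might look like the following; the names, namespace, service account, and image tag here are my own assumptions, not the official ingress-nginx manifest:

```yaml
# Sketch: run one copy of the controller on every node via a DaemonSet.
# Names, namespace, and image tag are illustrative only; use the official
# ingress-nginx manifests in practice.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: nginx-ingress-controller
  template:
    metadata:
      labels:
        app: nginx-ingress-controller
    spec:
      # assumed service account with RBAC to watch Ingresses/Services/Endpoints
      serviceAccountName: nginx-ingress
      containers:
        - name: controller
          image: registry.k8s.io/ingress-nginx/controller:v1.9.4  # example tag
          ports:
            - containerPort: 80
            - containerPort: 443
```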

By default, an Nginx daemon process is not part of any Kubernetes node; if your cluster autoscales, will you install the Nginx process manually on each new node?

If you are thinking of building your own AMI with the Nginx process baked in, using it for the node pool, and scaling that pool, that is possible, but what about OS patching and maintenance?

Why does the nginx ingress controller get only 1 replica, and on which node does it run? If the nginx controller pod dies, things will become unstable.

Running with replicas: 1 is just the default configuration; you can implement an HPA (HorizontalPodAutoscaler) and increase the replica count as needed. Nginx is lightweight, so handling a large volume of traffic does not necessarily require many replicas.

Still, if needed, you can run multiple replicas with an HPA, or scale the replica count up manually, to get high availability.
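As a sketch, an HPA for the controller could look something like this; the Deployment name, namespace, and thresholds are assumptions for illustration:

```yaml
# Sketch: scale the controller Deployment on CPU utilization, keeping at
# least two replicas for availability. Names and numbers are illustrative.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-ingress-controller
  minReplicas: 2          # never drop below two for redundancy
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```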

2 votes

Because Pods are how you run daemon processes (or really, all processes) inside Kubernetes. That's just how you run stuff. I suppose there is nothing stopping you from running it outside the cluster, manually setting up API configuration and authentication, doing all the needed networking bits yourself. But ... why?

As for replicas, you should indeed generally have more than one across multiple physical nodes for redundancy. A lot of the tutorials show it with replicas: 1 because either it's for a single-node dev cluster like Minikube or it's only an example.
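For example, a sketch of a multi-replica setup might use pod anti-affinity to keep the replicas on different physical nodes; the names, labels, and image tag here are illustrative, not the project's official manifest:

```yaml
# Sketch: two controller replicas forced onto different nodes via pod
# anti-affinity, so losing one node doesn't take down ingress entirely.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-ingress-controller
  template:
    metadata:
      labels:
        app: nginx-ingress-controller
    spec:
      affinity:
        podAntiAffinity:
          # require replicas to land on distinct nodes (by hostname)
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: nginx-ingress-controller
              topologyKey: kubernetes.io/hostname
      containers:
        - name: controller
          image: registry.k8s.io/ingress-nginx/controller:v1.9.4  # example tag
```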