0 votes

I am running nginx on a Kubernetes cluster with 3 nodes.

I am wondering whether there is any benefit to running, for example, 4 pods with their CPU/memory limited to roughly 1/4 of a node's capacity, versus running 1 pod per node with limits that let each pod use the resources of the whole node (for the sake of simplicity, we leave Kubernetes Services out of the equation).

My feeling is that fewer pods means less overhead, so going for 1 pod per node should give the best performance?
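For illustration only, the two setups could look roughly like the Deployment below; the node size (~4 CPU / 4Gi), image tag, and resource numbers are assumptions on my part, not actual values from my cluster:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
    spec:
      replicas: 4              # Option B: replicas: 3 (one pod per node)
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:1.25
            resources:
              requests:
                cpu: "1"       # Option B: something like "3", leaving headroom for system pods
                memory: 1Gi    # Option B: e.g. 3Gi
              limits:
                cpu: "1"
                memory: 1Gi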

Thanks in advance


2 Answers

1 vote

With more than 1 Pod, you get a certain level of high availability. Any pod will die at some point, and if it is behind a controller (which it should be) it will be re-created, but with a single replica you will have a small downtime while that happens; with several replicas the others keep serving.

Now, take into consideration that if you deploy more than one replica of your app, even though you give each one 1/n of the resources, there is a base image and a set of dependencies that get replicated with every copy.

As an example, let's imagine an app that runs on Ubuntu, and has 5 dependencies:

  • If you run 1 replica of this app, you are deploying 1 Ubuntu + 5 dependencies + the app itself.

  • If you run 4 replicas of this app, you are running 4 Ubuntus + 4*5 dependencies + 4 copies of the app.

My point is that if your base image is big and your dependencies are heavy, the cost of splitting into more replicas is not linear: the fixed overhead is paid once per replica instead of once in total.
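As a rough illustration with made-up numbers: if the per-replica runtime overhead (loaded libraries, per-process caches) is about 200Mi and the useful working set of the app is about 800Mi in total, one replica needs roughly 200 + 800 = 1000Mi, while 4 replicas need roughly 4 × 200 + 800 = 1600Mi for the same workload, because the overhead is paid four times.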

Performance-wise, I don't think there is much difference. With a single replica, one of your nodes will be hit hard because all of your requests end up there, but if that node can handle the load, there should be no problem.

0 votes

What you are referring to is the difference between horizontal and vertical scaling. With vertical scaling, you increase the resources (CPU/memory) given to your application as you see fit. With horizontal scaling, you increase the number of replicas of your application.
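As a sketch of the horizontal case, assuming your nginx Deployment is named nginx: you can either set the replica count by hand (kubectl scale deployment nginx --replicas=4) or let a HorizontalPodAutoscaler adjust it for you; the replica bounds and CPU threshold below are made-up numbers:

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: nginx
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: nginx
      minReplicas: 3
      maxReplicas: 6
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 70

Vertical scaling, in contrast, would simply mean editing the resources block of the Deployment to give each pod more CPU/memory.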

Choosing one or the other depends on features that your application may or may not have. In the case of nginx, scaling horizontally splits traffic across pods, and therefore across nodes, which should result in better throughput for what is most likely your reverse proxy.
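The splitting happens through the Service in front of the pods, which load-balances across all ready endpoints; a minimal sketch, assuming the pods carry the app: nginx label used above:

    apiVersion: v1
    kind: Service
    metadata:
      name: nginx
    spec:
      selector:
        app: nginx          # matches the pod labels of the nginx Deployment
      ports:
      - name: http
        port: 80
        targetPort: 80

With the replicas spread over several nodes, requests are then distributed across the nodes as well.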