I have an application that uses a lot of CPU. I turned to Kubernetes as a way to split the workload into small pieces of work and distribute them, so I created several CPU-limited pods. It turns out that Docker has a constraint whereby it distributes the total amount of CPU among all containers running CPU-intensive processes (https://docs.docker.com/engine/reference/run/#cpu-share-constraint). For that reason, each pod cannot use the full amount of CPU it should have, since Docker shares out resources by itself.
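For context, the behaviour those docs describe can be reproduced with plain Docker. This is a minimal sketch; the container names and the busy-loop command are just placeholders, not my real workload:

```sh
# CPU shares are relative weights, not hard caps: with both containers
# busy on the same cores, hog-a gets roughly twice the CPU time of hog-b.
docker run -d --name hog-a --cpu-shares=1024 busybox sh -c 'while :; do :; done'
docker run -d --name hog-b --cpu-shares=512  busybox sh -c 'while :; do :; done'
```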
Example:
Environment: a Kubernetes cluster with 80 CPU cores available in total
Test1:
- Context: a single pod limited to 5 CPU cores (see the manifest sketch after Test2)
- Processes: a single CPU-bound process running in that pod
- Duration: the process completes in 0:02:05
Test2:
- Context: 12 pods limited to 5 CPU cores each
- Processes: one process running in each pod (12 in total)
- Duration: each process takes an average of 0:03:55 to complete
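For reference, this is roughly how each test pod is limited. It is a minimal sketch; the pod name, image, and container name are placeholders, not my real job:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cpu-worker            # placeholder name
spec:
  containers:
  - name: worker
    image: my-cpu-job:latest  # placeholder image
    resources:
      requests:
        cpu: "5"   # Kubernetes maps requests.cpu to Docker's --cpu-shares
      limits:
        cpu: "5"   # limits.cpu becomes a hard cap via the CFS quota
```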
This means that CPU usage is affected (and therefore processing time increases) when several containers request CPU resources at the same time.
I guess Docker is not intended to be used the way I need.
I understand that in this scenario it might be better to use VMs instead of Docker containers, but is there a way to make this work (maybe by changing the Docker or Kubernetes configuration)?
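To illustrate the kind of configuration change I have in mind: I could imagine pinning each container to its own cores, although I don't know whether Kubernetes exposes this. The core range and image below are purely hypothetical:

```sh
# Hypothetical: give this container exclusive use of cores 0-4 so that
# other containers' CPU shares cannot dilute its allocation.
docker run -d --cpuset-cpus=0-4 my-cpu-job:latest
```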
Any helpful comment would be appreciated.