We have a Kubernetes cluster with three worker nodes, which was built manually, borrowing from the 'Kubernetes The Hard Way' tutorial.
Everything on this cluster works as expected, with one exception: the scheduler does not - or seems not to - honor the limit of 110 pods per worker node.
Example:
Worker Node 1: 60 pods
Worker Node 2: 100 pods
Worker Node 3: 110 pods
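(For reference, the counts above come from something like the following; the node name is a placeholder for my actual workers:)

    # Allocatable pod capacity per node, as reported by each kubelet
    kubectl get nodes -o custom-columns=NAME:.metadata.name,PODS:.status.allocatable.pods

    # Number of pods currently bound to one worker (repeated for each node)
    kubectl get pods --all-namespaces --field-selector spec.nodeName=worker-3 --no-headers | wc -l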
When I want to deploy a new pod, it often happens that the scheduler decides it would be best to schedule it onto Worker Node 3. The kubelet there refuses to run it, because it does honor its 110-pod limit. The scheduler then tries again and again and never succeeds.
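The retry loop is visible in the pod's events (<stuck-pod> is a placeholder for the affected pod's name):

    # Follow the scheduling/rejection events for the stuck pod, oldest first
    kubectl get events --field-selector involvedObject.name=<stuck-pod> --sort-by=.lastTimestamp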
I do not understand why this is happening. I think I might be missing some detail about this problem.
From my understanding, and from what I have read about the scheduler itself, there is no resource or metric for 'number of pods per node' that is considered during scheduling - or at least I haven't found anything suggesting otherwise in the Kubernetes scheduler documentation. Of course the scheduler considers CPU requests/limits, memory requests/limits and disk requests/limits - that is all fine and working. So I don't even see how the scheduler could take the number of pods already running on a worker into account, but there has to be some kind of functionality doing that, right? Or am I mistaken?
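In case it helps, this is what one of my nodes advertises to the API server (node name is a placeholder again). Since the scheduler works off the node object, I would expect any pod-count limit to come from here:

    # Capacity and Allocatable for one worker; both sections include a 'pods' line
    kubectl describe node worker-3 | grep -A 7 -E 'Capacity|Allocatable'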
Is my cluster broken? Or do I have some misconception about how scheduling should/does work?
Kubernetes binary version: v1.17.2
Edit: added the Kubernetes version.