After trying various providers (bare-metal Kubernetes, OpenShift, AWS EKS), we have found that even when a node has enough resources (CPU, RAM, disk), once it reaches ~110 pods, new pods hang in the Pending state with no events or errors except
"Successfully assigned {namespace}/{pod_name} to {node_name}"
We have searched for related logs in the kubelet, the scheduler, etc., but found nothing beyond the event mentioned above. For reference, these are roughly the checks we ran (a sketch; it assumes a systemd-managed kubelet, and `<pending-pod>`/`<namespace>`/`<node-name>` are placeholders):
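```
# Pod events: only the scheduler's "Successfully assigned ..." event shows up
kubectl describe pod <pending-pod> -n <namespace>
kubectl get events -n <namespace> --sort-by=.lastTimestamp

# Kubelet logs on the node the pod was assigned to (systemd-managed kubelet)
journalctl -u kubelet --no-pager | grep <pending-pod>
```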
Has anyone succeeded in running more than 110 pods per node? What are we doing wrong?
The only other thing worth mentioning is that in our case these are not 110 replicas of the same pod, but roughly 110 different pods from various Deployments/DaemonSets. And of course we have raised the node's pod limit above 110, roughly as sketched below.
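A sketch of how we raised and verified the limit; the exact mechanism differs per provider (kubeadm drop-in, EKS bootstrap arguments, OpenShift KubeletConfig CR), but the underlying kubelet setting is `maxPods` (the `--max-pods` flag):

```
# After raising maxPods, confirm the kubelet actually picked up the higher
# limit -- the node should report more than 110 pods here:
kubectl get node <node-name> -o jsonpath='{.status.capacity.pods}'
kubectl get node <node-name> -o jsonpath='{.status.allocatable.pods}'
```

Both commands report well above 110 on our nodes, yet pods beyond ~110 still stay Pending after being assigned.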