I have a few Kubernetes clusters with a different number of nodes in each, and my deployment config has "replicas: #nodes". There is no specific scheduling configuration for that pod, but after deployment I see strange behavior in how the pods are distributed across the nodes.
Example:
For a 30-node cluster (30 replicas), all 30 pod replicas end up on only 25 nodes, and the other 5 nodes sit idle in the cluster. The same happens on many other clusters, and the count varies with every new deployment or redeployment.
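(To see the actual pod-to-node mapping, and assuming the pods carry the app: test1 label used in the manifest further down, something like this lists which nodes received pods:

kubectl get pods -l app=test1 -o wide --sort-by=.spec.nodeName)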
Question:
I want to distribute my pod replicas across all nodes. If I set "replicas: #nodes", I should get one pod replica on each node; if I increase/double the replica count, the pods should still spread evenly. Is there any specific configuration for this in the deployment YAML for Kubernetes?
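For reference, a minimal sketch of the kind of deployment in question (name and image are placeholders, not from the original), with replicas set by hand to the node count:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test1
spec:
  replicas: 30            # set manually to the number of nodes in the cluster
  selector:
    matchLabels:
      app: test1
  template:
    metadata:
      labels:
        app: test1
    spec:
      containers:
      - name: test1
        image: example.registry/test1:latest   # placeholder image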
Configuration with pod anti-affinity below, but it still behaves as described above. I also tried "requiredDuringSchedulingIgnoredDuringExecution" (see the sketch after the manifest below), which did deploy one pod on each node, but if I increase the replicas or any node goes down during the deployment, the whole deployment fails.
metadata:
  labels:
    app: test1
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - test1
          topologyKey: kubernetes.io/hostname
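For comparison, a minimal sketch of the required variant mentioned above (assuming the same app: test1 label). Each entry is a plain pod affinity term with no weight, and scheduling fails outright when no node can satisfy it, which matches the failure seen when the replicas exceed the node count or a node goes down:

spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - test1
        topologyKey: kubernetes.io/hostname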
podAntiAffinity with preferredDuringSchedulingIgnoredDuringExecution weighs the scheduling towards not putting a pod on a node where pods with a certain label already exist. It does not fail scheduling if it's not possible to schedule on a different node, it just prefers a bit extra not to. – Joachim Isaksson