I have installed Elasticsearch/Kibana/Logstash using the official Helm charts with a customized values.yaml on a K3s cluster. If I run kubectl get nodes, I get the list of cluster nodes correctly. However, when I run kubectl get pods -o wide, I see that all the pods are assigned to only one of the nodes and the remaining nodes are not utilized.
I have tried kubectl scale --replicas=2 statefulset elasticsearch-master, but it attempts to schedule the new pod on the same node, which triggers the pod anti-affinity rule, so the extra replica never becomes ready.
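For reference, this is how I have been checking why the scaled-up pod cannot be placed, along with the anti-affinity settings I believe are relevant (the key names below are the ones documented in the official elastic/helm-charts values.yaml; they may differ in other chart versions):

```shell
# Inspect the new pod; the Events section at the bottom should say whether
# the scheduler rejected the other nodes (e.g. due to anti-affinity, taints,
# or resource pressure):
kubectl describe pod elasticsearch-master-1

# List node taints, since K3s nodes can carry taints that block scheduling:
kubectl describe nodes | grep -A2 Taints
```

```yaml
# Fragment of the official chart's values.yaml (key names per
# elastic/helm-charts): with "hard" anti-affinity, at most one
# elasticsearch-master pod can land on each node.
antiAffinity: "hard"
antiAffinityTopologyKey: "kubernetes.io/hostname"
```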
The number of nodes shown in Kibana's Stack Monitoring is always 1, and storage is limited to the first node's ephemeral disk.
Do I need to label the unused cluster nodes explicitly before Elasticsearch can start using them?
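In case labelling is the answer, this is the kind of setup I have in mind (the label key/value here are placeholders I made up, not anything from the chart):

```shell
# Hypothetical example: label the currently unused nodes...
kubectl label node <node-name> es-role=data
```

```yaml
# ...and select on that label in the chart's values.yaml
# (nodeSelector is a standard field the official chart passes
# through to the pod spec):
nodeSelector:
  es-role: data
```

Is something like this required, or should the scheduler spread the pods across nodes on its own once anti-affinity is configured?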