0 votes

I have installed Elasticsearch/Kibana/Logstash using the official Helm charts with a customized values.yaml on a K3s cluster. If I run kubectl get nodes, I get the list of cluster nodes correctly. However, when I run kubectl get pods -o wide, I see that all the pods are assigned to only one of the nodes and the remaining nodes are not utilized.
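
For reference, these are the commands I am using to check where the pods land (the -o wide output includes a NODE column showing the pod-to-node assignment):

$ kubectl get nodes
$ kubectl get pods -o wide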

I have tried:

➜ ~ kubectl scale --replicas=2 statefulset elasticsearch-master

It tries to schedule the new pod on the same node, which triggers the pod anti-affinity rule, so the extra replica never lands on another node.
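
I assume the anti-affinity comes from the chart's default settings; if I read the chart's values correctly, they look roughly like this (a sketch, not copied from my own values.yaml):

# elasticsearch chart values (defaults, as I understand them)
antiAffinity: "hard"                               # hard rule: no two master pods on the same node
antiAffinityTopologyKey: "kubernetes.io/hostname"  # spread pods across distinct hostnames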

The number of nodes shown in Kibana stack monitoring is always 1. The storage is also limited to the first node's ephemeral disk.
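
In case it is relevant, this is how I am checking which storage is in use (as far as I understand, K3s ships with the local-path provisioner as the default StorageClass, which pins each volume to the node it was created on):

$ kubectl get storageclass
$ kubectl get pvc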

Should I explicitly label the unused cluster nodes before Elasticsearch can start using them?

Could you share the link to the elasticsearch Helm chart, the version of the chart, and your values.yml file? - antaxify
@antaxify, thank you for your comment. I am using 7.10.2, and I found the error. The mistake was giving a label to the other nodes in the cluster; I should have left those nodes unlabeled: $ kubectl label node ip-X-X-X-X.ec2.internal node-role.kubernetes.io/worker=worker - Mustafa Qamaruddin

1 Answer

1 vote

I found the error. The mistake was giving a label to the other nodes in the cluster; I should have left those nodes unlabeled.

I shouldn't have run:

$ kubectl label node ip-X-X-X-X.ec2.internal node-role.kubernetes.io/worker=worker
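
For anyone hitting the same issue, the label can be removed again by appending a trailing dash to the label key:

$ kubectl label node ip-X-X-X-X.ec2.internal node-role.kubernetes.io/worker-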