I'm trying to deploy the ELK stack in a Kubernetes cluster with Helm, using the `stable/elastic-stack` chart. When I launch

```
helm install elk-stack stable/elastic-stack
```

I receive the following message:
```
NAME: elk-stack
LAST DEPLOYED: Mon Aug 24 07:30:31 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
The elasticsearch cluster and associated extras have been installed.
Kibana can be accessed:

  * Within your cluster, at the following DNS name at port 9200:

    elk-stack-elastic-stack.default.svc.cluster.local

  * From outside the cluster, run these commands in the same shell:

    export POD_NAME=$(kubectl get pods --namespace default -l "app=elastic-stack,release=elk-stack" -o jsonpath="{.items[0].metadata.name}")
    echo "Visit http://127.0.0.1:5601 to use Kibana"
    kubectl port-forward --namespace default $POD_NAME 5601:5601
```
But when I run

```
kubectl get pods
```

the result is:
```
NAME                                              READY   STATUS    RESTARTS   AGE
elk-stack-elasticsearch-client-7fcfc7b858-5f7fw   0/1     Running   0          12m
elk-stack-elasticsearch-client-7fcfc7b858-zdkwd   0/1     Running   1          12m
elk-stack-elasticsearch-data-0                    0/1     Pending   0          12m
elk-stack-elasticsearch-master-0                  0/1     Pending   0          12m
elk-stack-kibana-cb7d9ccbf-msw95                  1/1     Running   0          12m
elk-stack-logstash-0                              0/1     Pending   0          12m
```
Using the `kubectl describe pods` command, I see that for the Elasticsearch pods the problem is:
```
Warning  FailedScheduling  6m29s  default-scheduler  running "VolumeBinding" filter plugin for pod "elk-stack-elasticsearch-data-0": pod has unbound immediate PersistentVolumeClaims
```
and for the Logstash pod:
```
Warning  FailedScheduling  7m53s  default-scheduler  running "VolumeBinding" filter plugin for pod "elk-stack-logstash-0": pod has unbound immediate PersistentVolumeClaims
```
Output of `kubectl get pv,pvc,sc -A`:
```
NAME                                  CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS   REASON   AGE
persistentvolume/elasticsearch-data   10Gi       RWO            Retain           Bound    default/elasticsearch-data   manual                  16d

NAMESPACE   NAME                                                                STATUS    VOLUME               CAPACITY   ACCESS MODES   STORAGECLASS   AGE
default     persistentvolumeclaim/claim1                                        Pending                                                  slow           64m
default     persistentvolumeclaim/data-elk-stack-elasticsearch-data-0           Pending                                                                 120m
default     persistentvolumeclaim/data-elk-stack-elasticsearch-master-0         Pending                                                                 120m
default     persistentvolumeclaim/data-elk-stack-logstash-0                     Pending                                                                 120m
default     persistentvolumeclaim/elasticsearch-data                            Bound     elasticsearch-data   10Gi       RWO            manual         16d
default     persistentvolumeclaim/elasticsearch-data-elasticsearch-data-0       Pending                                                                 17d
default     persistentvolumeclaim/elasticsearch-data-elasticsearch-data-1       Pending                                                                 17d
default     persistentvolumeclaim/elasticsearch-data-quickstart-es-default-0    Pending                                                                 16d
default     persistentvolumeclaim/elasticsearch-master-elasticsearch-master-0   Pending                                                                 17d
default     persistentvolumeclaim/elasticsearch-master-elasticsearch-master-1   Pending                                                                 17d
default     persistentvolumeclaim/elasticsearch-master-elasticsearch-master-2   Pending                                                                 16d

NAMESPACE   NAME                                          PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
            storageclass.storage.k8s.io/slow (default)    kubernetes.io/gce-pd   Delete          Immediate           false                  66m
```
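One detail I noticed in this output: the `slow` class is 66m old, but the chart's claims are 120m old, so they were created before any default StorageClass existed and carry no `storageClassName` (a default class is only applied at claim-creation time, not retroactively). Also, `slow` uses `Immediate` binding, which is what produces the "unbound immediate PersistentVolumeClaims" scheduling error when provisioning stalls. As a sketch (the class name is illustrative, and this would only affect claims that actually request the class), a `WaitForFirstConsumer` variant of the same class would look like:

```yaml
# Hypothetical variant of the `slow` class: WaitForFirstConsumer delays
# binding/provisioning until a consuming pod is scheduled, so the scheduler
# does not reject pods for having unbound *immediate* claims.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow-wffc              # illustrative name
provisioner: kubernetes.io/gce-pd   # only works on GCE/GKE clusters
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```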
The storage class `slow` and the persistent volume claim `claim1` are my experiments; I created them with `kubectl create` and a YAML file. The others were created automatically by Helm (I think).
Output of `kubectl get pvc data-elk-stack-elasticsearch-master-0 -o yaml`:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: "2020-08-24T07:30:38Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app: elasticsearch
    release: elk-stack
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:app: {}
          f:release: {}
      f:spec:
        f:accessModes: {}
        f:resources:
          f:requests:
            .: {}
            f:storage: {}
        f:volumeMode: {}
      f:status:
        f:phase: {}
    manager: kube-controller-manager
    operation: Update
    time: "2020-08-24T07:30:38Z"
  name: data-elk-stack-elasticsearch-master-0
  namespace: default
  resourceVersion: "201123"
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/data-elk-stack-elasticsearch-master-0
  uid: de58f769-f9a7-41ad-a449-ef16d4b72bc6
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
  volumeMode: Filesystem
status:
  phase: Pending
```
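Since this claim has no `storageClassName`, my understanding is that it can only bind to a statically created PersistentVolume with an empty `storageClassName` and at least the requested capacity. As an experiment I sketched a small script that stamps out such PVs for each Pending claim; only the master claim's 4Gi request is confirmed by the YAML above, the other sizes are placeholders I would verify with `kubectl get pvc -o yaml`, and `hostPath` is only suitable for single-node test clusters:

```python
import json

# (claim name, requested size) pairs; only the master claim's 4Gi is
# confirmed above -- the other sizes are placeholders to double-check.
claims = [
    ("data-elk-stack-elasticsearch-master-0", "4Gi"),
    ("data-elk-stack-elasticsearch-data-0", "4Gi"),  # placeholder size
    ("data-elk-stack-logstash-0", "4Gi"),            # placeholder size
]

def pv_manifest(claim: str, size: str) -> dict:
    """Build a PersistentVolume able to bind a claim with no storageClassName."""
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolume",
        "metadata": {"name": f"pv-{claim}"},
        "spec": {
            "capacity": {"storage": size},
            "accessModes": ["ReadWriteOnce"],
            "persistentVolumeReclaimPolicy": "Retain",
            # Empty string matches claims that have no storageClassName set.
            "storageClassName": "",
            # hostPath is for single-node test clusters only; path is illustrative.
            "hostPath": {"path": f"/mnt/data/{claim}"},
        },
    }

for claim, size in claims:
    # kubectl accepts JSON manifests, e.g.: ... | kubectl create -f -
    print(json.dumps(pv_manifest(claim, size)))
```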
Can somebody please help me to fix this problem? Thanks in advance.