0
votes

I am stuck on this issue: I set up a cluster with kubeadm (on a single dedicated server for now) and installed Elasticsearch using Helm. It is nearly working, except for storage. The chart relies on the default StorageClass for dynamic provisioning of PVs.

So I created a default StorageClass (kubernetes.io/gce-pd / pd-standard) and enabled the DefaultStorageClass admission plugin in the apiserver to allow dynamic provisioning. But this still doesn't work: the pods keep getting the FailedBinding event "no persistent volumes available for this claim and no storage class is set".

I checked the Elasticsearch Helm chart and it does not specify a StorageClass for its PVC, so it should work. I'm also missing something else: I can't understand where Kubernetes will allocate the PVs on disk. I never configured that anywhere, and it's not in the StorageClass either.
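For reference, a default StorageClass along the lines of what I created would look roughly like this (the name is just illustrative; the annotation is what marks the class as the cluster default):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    # This annotation makes the class the cluster-wide default
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
```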

I've verified that dynamic provisioning is triggered, as the default StorageClass gets inserted into the PVC definition:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/gce-pd
  creationTimestamp: "2019-12-19T10:37:04Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app: kibanaelastic-master
  name: kibanaelastic-master-kibanaelastic-master-0
  namespace: elasticsearch
  resourceVersion: "360956"
  selfLink: /api/v1/namespaces/elasticsearch/persistentvolumeclaims/kibanaelastic-master-kibanaelastic-master-0
  uid: 22b1c23a-312e-4b56-a0bb-17f2da372509
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
  storageClassName: slow
  volumeMode: Filesystem
status:
  phase: Pending

So what else should I check? Any clue?

2
Can you provide the YAML of your storage class? Also, you mentioned that the PVC is using the default storage class, but it is actually using the storage class named "slow" (spec.storageClassName: slow). – Patrick W
Never mind, I found the error. I was using the wrong provisioner in the default storage class, and this provisioner was not configured. I'm curious how I could make this error more visible and understandable. – Softlion
Glad to hear that you have found the issue. However, it would be good if you could post your last comment as an answer to this thread (to follow the Stack Overflow guidelines). – Nick

2 Answers

0
votes

Never mind, I found the error. I was using the wrong provisioner in the default storage class, and this provisioner was not configured. I'm curious how I could make this error more visible and understandable.
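For context: on a single dedicated server there is no GCE backing the cluster, so the kubernetes.io/gce-pd provisioner can never work there. A sketch of a StorageClass suited to that setup (name illustrative; with no external provisioner installed, PVs must be pre-created by hand, or a provisioner such as local-path or NFS must be deployed first):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
# No dynamic provisioning: PVs are created manually and bound on demand
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
```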

0
votes

Let me add my 5 cents here :).

I can't understand where Kubernetes will allocate the PVs on disk. I never configured that anywhere, and it's not in the StorageClass either.

Assuming that we are using GCP, the whole chain of Storage Classes and PV claims is expected to work in the following way:

  1. StorageClass
  2. PersistentVolumeClaim (on that Storage Class)
  3. volumes: field in Deployment (for example)

Example:

$ cat minio-storage-class.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: minio-disk
provisioner: kubernetes.io/gce-pd
parameters:
    type: pd-standard
reclaimPolicy: Delete
volumeBindingMode: Immediate

This creates a new StorageClass in GCP.

Then we claim a Persistent Volume with that storage class's parameters:

$ cat minio-pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: minio-claim
  namespace: default
  annotations:
    volume.beta.kubernetes.io/storage-class: minio-disk
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

The important thing is that we have specified the minio-disk storage class in the annotations. After that, we can see that the minio-claim PVC was created and its phase is "Bound".


After that we can use it in our Deployment (I have omitted about three quarters of the file for clarity):

cat minio-deploy.yaml 
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: minio
  ...
spec:
  ...
  template:
    ...
    spec:
      volumes:
      - name: storage
        persistentVolumeClaim:
          claimName: minio-claim
     ...

I'm curious how i could get this error more visible and understandable.

If your PVC is stuck in "Pending", you can troubleshoot it with: $ kubectl describe persistentvolumeclaim <your-pvc-name>

In my case it looks like this:

$ kubectl describe persistentvolumeclaim minio-claim-broken

Name:          minio-claim-broken
Namespace:     default
StorageClass:  minio-disk-1
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-class: minio-disk-1
               volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/gce-pd
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Mounted By:    <none>
Events:
  Type     Reason              Age                   From                         Message
  ----     ------              ----                  ----                         -------
  Warning  ProvisioningFailed  61s (x10 over 6m47s)  persistentvolume-controller  Failed to provision volume with StorageClass "minio-disk-1": invalid option "diskformat" for volume plugin kubernetes.io/gce-pd

That gives insight into what went wrong during creation (an invalid option in the StorageClass definition).

Hope that helps :)