0
votes

I am trying to deploy a PersistentVolume for 3 pods to work on, and I want to use the cluster nodes' storage, i.e. not external storage like an EBS volume spun up on demand.

To achieve this, I ran the following experiments:

1) I applied only the PVC resource defined below:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: pv1
  name: pv1
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
status: {}

This spins up storage from the default StorageClass, which in my case was DigitalOcean's block storage, so it created a 1Gi volume.
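For reference, you can check which StorageClass is the default with the command below; the default one is marked (default) in the output. On DigitalOcean's managed Kubernetes that is typically a class named do-block-storage.

$ kubectl get storageclass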

2) I created a PV resource and a PVC resource as below:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
  path: "/mnt/data"

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: pv1
  name: pv1
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
status: {}

After this, I see my claim is bound:

    pavan@p1:~$ kubectl get pvc
    NAME   STATUS   VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    pv1    Bound    task-pv-volume   10Gi       RWO            manual         2m5s
    pavan@p1:~$ kubectl get pv
    NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM         STORAGECLASS   REASON   AGE
    task-pv-volume   10Gi       RWO            Retain           Bound    default/pv1   manual                  118m
    pavan@p1:~$ kubectl describe pvc
    Name:          pv1
    Namespace:     default
    StorageClass:  manual
    Status:        Bound
    Volume:        task-pv-volume
    Labels:        io.kompose.service=pv1
    Annotations:   kubectl.kubernetes.io/last-applied-configuration:
                     {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"creationTimestamp":null,"labels":{"io.kompose.service":"mo...
                   pv.kubernetes.io/bind-completed: yes
                   pv.kubernetes.io/bound-by-controller: yes
    Finalizers:    [kubernetes.io/pvc-protection]
    Capacity:      10Gi
    Access Modes:  RWO
    VolumeMode:    Filesystem
    Mounted By:    <none>
    Events:
      Type     Reason              Age                 From                         Message
      ----     ------              ----                ----                         -------
      Warning  ProvisioningFailed  28s (x8 over 2m2s)  persistentvolume-controller  storageclass.storage.k8s.io "manual" not found

Below are the questions that I am hoping to get answers/pointers to:

  1. The above warning says the storage class could not be found. Do I need to create one? If so, can you tell me why and how, or give a pointer? (Somehow this link fails to state that: https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/)

  2. Notice the PV has a storage capacity of 10Gi while the PVC requests only 1Gi, yet the PVC was bound with 10Gi capacity. Can't I share the same PV capacity with other PVCs?

For question 2: if I have to create a different PV for each PVC with the required capacity, do I have to create a StorageClass as well? Or can I use the same StorageClass and use selectors to select the corresponding PV?

2
Is this issue still relevant? – Black_Bacardi

2 Answers

3
votes

I tried to reproduce the behavior in order to answer all your questions. However, I don't have access to DigitalOcean, so I tested on GKE.

The above warning says the storage class could not be found. Do I need to create one?

According to the documentation and best practices, it is highly recommended to create a StorageClass and then create PVs/PVCs based on it. However, there is also something called manual provisioning, which is what you did in this case.

Manual provisioning means you create a PV by hand first, and then a PVC whose spec.storageClassName field matches it. Examples:

  • If you create a PVC without a default StorageClass, without a PV, and without the storageClassName parameter (AFAIK kubeadm does not provide a default StorageClass), the PVC will be stuck in Pending with the event: no persistent volumes available for this claim and no storage class is set.
  • If you create a PVC on a cluster that has a default StorageClass but omit the storageClassName parameter, the volume will be provisioned from the default StorageClass.
  • If you create a PVC with a storageClassName parameter that matches no existing StorageClass (in the cloud, on Minikube, or on kubeadm alike), the PVC will also be Pending with the warning: storageclass.storage.k8s.io "manual" not found. However, once you create a PV with the same storageClassName parameter, the PVC will be bound shortly, as the example below shows.

Example:

$ kubectl get pv,pvc
NAME                              CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
persistentvolume/task-pv-volume   10Gi       RWO            Retain           Available           manual                  4s

NAME                        STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/pv1   Pending                                      manual         4m12s

...

$ kubectl get pv,pvc
NAME                              CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM         STORAGECLASS   REASON   AGE
persistentvolume/task-pv-volume   10Gi       RWO            Retain           Bound    default/pv1   manual                  9s

NAME                        STATUS   VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/pv1   Bound    task-pv-volume   10Gi       RWO            manual         4m17s

The disadvantage of manual provisioning is that you have to create a PV for each PVC. If you use a StorageClass, you can just create the PVC and the volume is provisioned dynamically.
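For illustration, with dynamic provisioning the whole manifest reduces to a single PVC. A minimal sketch, assuming a StorageClass named standard exists (GKE provides one by that name; substitute your cluster's class):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-pvc            # illustrative name
spec:
  storageClassName: standard   # assumed: an existing StorageClass on your cluster
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi             # the provisioner creates a volume of exactly this size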

If so, can you tell me why and how? or any pointer.

You can use the documentation examples or check here. As you are using a cloud provider with a default StorageClass, you can export it to YAML with:

$ kubectl get sc -o yaml > storageclass.yaml

If you have more than one, you have to specify which one; the StorageClass names can be obtained with:

$ kubectl get sc

Later you can refer to the K8s API to customize your StorageClass.
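Since you want the cluster nodes' own storage rather than external volumes, here is a minimal StorageClass sketch for that case (the name local-storage is illustrative; kubernetes.io/no-provisioner means PVs are still created by hand, and WaitForFirstConsumer delays binding until a pod is scheduled, so a PV on the right node is picked):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage                        # illustrative name
provisioner: kubernetes.io/no-provisioner    # no dynamic provisioning; PVs must be created manually
volumeBindingMode: WaitForFirstConsumer      # bind only once a consuming pod is scheduled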

Notice the PV has a storage capacity of 10Gi while the PVC requests only 1Gi, yet the PVC was bound with 10Gi capacity.

You manually created a PV with 10Gi while the PVC requested 1Gi. As PVCs and PVs bind 1:1, the PVC searched for a PV that met all its conditions and bound to it: pv1 requested 1Gi and task-pv-volume satisfied that request, so Kubernetes bound them. Unfortunately, much of the space is wasted in this case.

Can't I share the same PV capacity with other PVCs?

Unfortunately, you cannot bind 2 PVCs to 1 PV, as the relationship between a PVC and a PV is 1:1, but you can configure many pods/deployments to use the same PVC, as sketched below.

I can advise you to look at this Stack Overflow case, as it explains the AccessMode specifics very well.
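To make that concrete, here is a sketch of a Deployment whose 3 replicas all mount your pv1 claim. The name and image are illustrative, and note that with ReadWriteOnce all replicas must land on the same node for the mounts to succeed:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: shared-storage-demo      # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: shared-storage-demo
  template:
    metadata:
      labels:
        app: shared-storage-demo
    spec:
      containers:
      - name: app
        image: nginx             # illustrative image
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: pv1         # the PVC from your question; all 3 pods share it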

If I have to create a different PV for each PVC with the required capacity, do I have to create a StorageClass as well? Or can I use the same StorageClass and use selectors to select the corresponding PV?

As I mentioned before, if you manually create a PV with a specific size and a PVC that requests less is bound to it, the extra space is wasted. So you either have to create each PV and PVC with matching resource requests, or let a StorageClass provision storage sized to the PVC request.
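Regarding the selector part of your question: you can keep a single storageClassName and pin a PVC to a particular PV through spec.selector. A sketch that reuses the type: local label from your PV manifest (selectors only work with pre-created PVs, not with dynamic provisioning):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv1
spec:
  storageClassName: manual
  selector:
    matchLabels:
      type: local        # matches the label on task-pv-volume from your question
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi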

0
votes
  1. Yes, you have to create a storage class (check), but I guess DigitalOcean provides a default storage class; you can verify that with: kubectl get storageclasses

  2. You can share one PV, but only with read-only access; if you need write access for all pods, you have to create multiple PVs (check).