0 votes

I created a GCE disk, created a PersistentVolume backed by that disk, and the PVC bound to it successfully. But when I deploy the pod, the volume fails to mount. Details below.

$ gcloud compute disks list

NAME                  LOCATION           LOCATION_SCOPE  SIZE_GB  TYPE         STATUS
test-kubernetes-disk  asia-southeast1-a  zone            200      pd-standard  READY

pod.yml

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
      - mountPath: /test-pd
        name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim

pv.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-gce
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage:  200Gi
  storageClassName: fast
  gcePersistentDisk:
    pdName: test-kubernetes-disk
    fsType: ext4

pvc.yml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage:  1Gi
  storageClassName: fast

Below are the events of the pod.

Events:
  Type     Reason       Age   From               Message
  ----     ------       ----  ----               -------
  Normal   Scheduled    12m   default-scheduler  Successfully assigned default/mypod to worker-0
  Warning  FailedMount  9m6s  kubelet, worker-0  MountVolume.SetUp failed for volume "pv-gce" : mount of disk /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce failed: mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce --scope -- mount  -o bind /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/test-kubernetes-disk /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce
Output: Running scope as unit: run-r4b3f35b2b0354f26ba64375388054054.scope
mount: /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce: special device /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/test-kubernetes-disk does not exist.
  Warning  FailedMount  6m52s  kubelet, worker-0  MountVolume.SetUp failed for volume "pv-gce" : mount of disk /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce failed: mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce --scope -- mount  -o bind /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/test-kubernetes-disk /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce
Output: Running scope as unit: run-ra8fb00a02d6145fa9c54e88adf81e942.scope
mount: /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce: special device /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/test-kubernetes-disk does not exist.
  Warning  FailedMount  5m52s (x2 over 8m9s)  kubelet, worker-0  Unable to attach or mount volumes: unmounted volumes=[mypd], unattached volumes=[default-token-s82xz mypd]: timed out waiting for the condition
  Warning  FailedMount  4m35s                 kubelet, worker-0  MountVolume.SetUp failed for volume "pv-gce" : mount of disk /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce failed: mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce --scope -- mount  -o bind /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/test-kubernetes-disk /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce
Output: Running scope as unit: run-rf86d063bc5e44878831dc2734575e9cf.scope
mount: /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce: special device /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/test-kubernetes-disk does not exist.
  Warning  FailedMount  2m18s  kubelet, worker-0  MountVolume.SetUp failed for volume "pv-gce" : mount of disk /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce failed: mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce --scope -- mount  -o bind /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/test-kubernetes-disk /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce
Output: Running scope as unit: run-rb9edbe05f62449d0aa0d5ed8bedafb29.scope
mount: /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce: special device /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/test-kubernetes-disk does not exist.
  Warning  FailedMount         80s (x3 over 10m)  kubelet, worker-0        Unable to attach or mount volumes: unmounted volumes=[mypd], unattached volumes=[mypd default-token-s82xz]: timed out waiting for the condition
  Warning  FailedAttachVolume  8s (x5 over 11m)   attachdetach-controller  AttachVolume.NewAttacher failed for volume "pv-gce" : Failed to get GCE GCECloudProvider with error <nil>
  Warning  FailedMount         3s                 kubelet, worker-0        MountVolume.SetUp failed for volume "pv-gce" : mount of disk /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce failed: mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce --scope -- mount  -o bind /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/test-kubernetes-disk /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce
Output: Running scope as unit: run-r5290d9f978834d4681966a40c3f535fc.scope
mount: /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce: special device /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/test-kubernetes-disk does not exist.

kubectl get pv

NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
pv-gce   200Gi      RWO            Retain           Bound    default/myclaim   fast                    23m

kubectl get pvc

NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myclaim   Bound    pv-gce   200Gi      RWO            fast           22m

Please kindly help with this.

4
The error is saying that test-kubernetes-disk is not a valid persistent disk name. - Kamol Hasan

4 Answers

0 votes

You are missing the claimRef field in the PV spec; adding it pre-binds the PV to the desired PVC.

Also make sure that the PV and the Pod are in the same zone. GCE Persistent Disks are a zonal resource, so a Pod can only use a Persistent Disk that is in its zone.
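You can check which zone each worker node is in through the standard topology labels (on a self-managed cluster without a cloud provider these labels may be missing, in which case you need to label the nodes yourself):

$ kubectl get nodes -L topology.kubernetes.io/zone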

Try applying these:

pv.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-gce
spec:
  claimRef:
    namespace: default
    name: myclaim
  accessModes:
    - ReadWriteOnce
  capacity:
    storage:  200Gi
  storageClassName: fast
  gcePersistentDisk:
    pdName: test-kubernetes-disk
    fsType: ext4
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.kubernetes.io/zone
          operator: In
          values:
          - asia-southeast1-a
        - key: topology.kubernetes.io/region
          operator: In
          values:
          - asia-southeast1

pvc.yml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage:  200Gi
  storageClassName: fast

The StorageClass should look like this:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  fstype: ext4
  replication-type: none

And the Pod should look like this:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - asia-southeast1-a
          - key: topology.kubernetes.io/region
            operator: In
            values:
            - asia-southeast1
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
      - mountPath: /test-pd
        name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim
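Once all four manifests are applied, the PV should bind to the PVC and the Pod should be scheduled into the disk's zone. A quick way to verify (the file names here are just illustrative):

$ kubectl apply -f storageclass.yml -f pv.yml -f pvc.yml -f pod.yml
$ kubectl get pv,pvc
$ kubectl get pod mypod -o wide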

0 votes

Can you retry? Just delete everything and follow these steps:

gcloud compute disks create --size=200GB --zone=asia-southeast1-a test-kubernetes-disk

Then apply this one:

pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - mountPath: /test-pd
      name: mypd
  volumes:
  - name: mypd
    # This GCE PD must already exist.
    gcePersistentDisk:
      pdName: test-kubernetes-disk
      fsType: ext4

With this approach you don't need to bother with a PV and PVC at all.
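After applying it, you can check whether the disk attaches and mounts by watching the Pod's events, e.g.:

$ kubectl apply -f pod.yaml
$ kubectl describe pod test-pd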

0 votes

@Emon, here is the output of the disk describe command.

$ gcloud compute disks describe test-kubernetes-disk
creationTimestamp: '2021-01-19T18:03:01.982-08:00'
id: '5437882943050232250'
kind: compute#disk
labelFingerprint: 42WmSpB8rSM=
lastAttachTimestamp: '2021-01-19T21:41:26.170-08:00'
lastDetachTimestamp: '2021-01-19T21:46:38.814-08:00'
name: test-kubernetes-disk
physicalBlockSizeBytes: '4096'
selfLink: https://www.googleapis.com/compute/v1/projects/test-01/zones/asia-southeast1-a/disks/test-kubernetes-disk
sizeGb: '200'
status: READY
type: https://www.googleapis.com/compute/v1/projects/test-01/zones/asia-southeast1-a/diskTypes/pd-standard
zone: https://www.googleapis.com/compute/v1/projects/test-01/zones/asia-southeast1-a
0 votes

@Emon, the issue still exists. I deleted everything: the disk, the pods, the PV, the PVC, and the StorageClass. Then I created a new disk and applied only the pod.yml you provided.

$ kubectl describe pod test-pd
Name:         test-pd
Namespace:    default
Priority:     0
Node:         worker-0/10.240.0.20
Start Time:   Thu, 21 Jan 2021 06:18:00 +0000
Labels:       <none>
Annotations:
Status:       Pending
IP:
IPs:          <none>
Containers:
  myfrontend:
    Container ID:
    Image:          nginx
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /test-pd from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-s82xz (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  mypd:
    Type:       GCEPersistentDisk (a Persistent Disk resource in Google Compute Engine)
    PDName:     test-kubernetes-disk
    FSType:     ext4
    Partition:  0
    ReadOnly:   false
  default-token-s82xz:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-s82xz
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason              Age   From                     Message
  ----     ------              ----  ----                     -------
  Normal   Scheduled           59s   default-scheduler        Successfully assigned default/test-pd to worker-0
  Warning  FailedAttachVolume  8s    attachdetach-controller  AttachVolume.NewAttacher failed for volume "mypd" : Failed to get GCE GCECloudProvider with error <nil>

BTW, are you sure that I don't need to specify the cloud provider flag?
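For context, my understanding is that the "Failed to get GCE GCECloudProvider" error means the in-tree GCE cloud provider is not enabled on the cluster. On a self-managed cluster that would mean starting the control plane components and kubelets with something like the flags below (a sketch of the legacy in-tree setup; the gce.conf path is only an example):

# kube-apiserver and kube-controller-manager
--cloud-provider=gce
--cloud-config=/etc/kubernetes/gce.conf

# kubelet
--cloud-provider=gce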