I am trying to deploy MongoDB to my Kubernetes cluster. It automatically creates a PVC and PV based on the storage class name I specify. However, the pod is stuck in ContainerCreating because of the following error:

MountVolume.MountDevice failed for volume "pvc-f88bdca6-7794-455a-872f-8230f1ce295d" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/us-east-2a/vol-087b3e95d1aa21e03 --scope -- mount -t xfs -o debug,defaults /dev/xvdbq /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/us-east-2a/vol-087b3e95d1aa21e03
Output: Running scope as unit run-4113.scope.
mount: /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/us-east-2a/vol-087b3e95d1aa21e03: wrong fs type, bad option, bad superblock on /dev/nvme1n1, missing codepage or helper program, or other error.

I'm not sure what to do, as this error is consistent no matter how many times I uninstall and reinstall the Helm chart.
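This is roughly what I plan to run on the node to inspect the device (a sketch, assuming SSH access to the instance; /dev/nvme1n1 is the device name from the error above):

# Does the attached EBS device already carry a filesystem signature, and which one?
lsblk -f
sudo file -s /dev/nvme1n1
sudo blkid /dev/nvme1n1
# The kernel log usually holds the concrete xfs error behind "wrong fs type, bad option, bad superblock"
dmesg | tail -n 20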
kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:12:00Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.6-eks-49a6c0", GitCommit:"49a6c0bf091506e7bafcdb1b142351b69363355a", GitTreeState:"clean", BuildDate:"2020-12-23T22:10:21Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
storage class
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mongodbstorage
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: xfs
reclaimPolicy: Retain
allowVolumeExpansion: true
mountOptions:
  - debug
volumeBindingMode: WaitForFirstConsumer
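One thing I noticed is that the mountOptions from this StorageClass end up verbatim in the kubelet's mount call below (-o debug,defaults), and I'm not sure xfs even supports a debug mount option. To separate a bad mount option from a bad filesystem, I could replay the mount by hand on the node (a sketch, assuming SSH access; device, fs type and options copied from the event output below):

sudo mkdir -p /mnt/ebs-test
# Replay exactly what kubelet attempted:
sudo mount -t xfs -o debug,defaults /dev/nvme1n1 /mnt/ebs-test
# Retry without the extra option to see whether 'debug' alone is the culprit:
sudo mount -t xfs -o defaults /dev/nvme1n1 /mnt/ebs-test
dmesg | tail -n 5   # xfs logs its actual rejection reason here
sudo umount /mnt/ebs-test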
kubectl describe pod mongodb-prod-0 -n mongodb
Name:           mongodb-prod-0
Namespace:      mongodb
Priority:       0
Node:           ip-10-0-4-244.us-east-2.compute.internal/10.0.4.244
Start Time:     Sat, 24 Apr 2021 20:03:06 +0100
Labels:         app.kubernetes.io/component=mongodb
                app.kubernetes.io/instance=mongodb-prod
                app.kubernetes.io/managed-by=Helm
                app.kubernetes.io/name=mongodb
                controller-revision-hash=mongodb-prod-58c557d4fc
                helm.sh/chart=mongodb-10.12.5
                statefulset.kubernetes.io/pod-name=mongodb-prod-0
Annotations:    kubernetes.io/psp: eks.privileged
Status:         Pending
IP:
IPs:            <none>
Controlled By:  StatefulSet/mongodb-prod
Containers:
  mongodb:
    Container ID:
    Image:          docker.io/bitnami/mongodb:4.4.5-debian-10-r0
    Image ID:
    Port:           27017/TCP
    Host Port:      0/TCP
    Command:
      /scripts/setup.sh
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Liveness:       exec [mongo --disableImplicitSessions --eval db.adminCommand('ping')] delay=30s timeout=5s period=10s #success=1 #failure=6
    Readiness:      exec [bash -ec mongo --disableImplicitSessions $TLS_OPTIONS --eval 'db.hello().isWritablePrimary || db.hello().secondary' | grep -q 'true'
                    ] delay=5s timeout=5s period=10s #success=1 #failure=6
    Environment:
      BITNAMI_DEBUG:                    false
      MY_POD_NAME:                      mongodb-prod-0 (v1:metadata.name)
      MY_POD_NAMESPACE:                 mongodb (v1:metadata.namespace)
      K8S_SERVICE_NAME:                 mongodb-prod-headless
      MONGODB_INITIAL_PRIMARY_HOST:     mongodb-prod-0.$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local
      MONGODB_REPLICA_SET_NAME:         rs0
      MONGODB_ROOT_PASSWORD:            <set to the key 'mongodb-root-password' in secret 'mongodb-prod'>    Optional: false
      MONGODB_REPLICA_SET_KEY:          <set to the key 'mongodb-replica-set-key' in secret 'mongodb-prod'>  Optional: false
      ALLOW_EMPTY_PASSWORD:             no
      MONGODB_SYSTEM_LOG_VERBOSITY:     0
      MONGODB_DISABLE_SYSTEM_LOG:       no
      MONGODB_DISABLE_JAVASCRIPT:       no
      MONGODB_ENABLE_IPV6:              no
      MONGODB_ENABLE_DIRECTORY_PER_DB:  no
    Mounts:
      /bitnami/mongodb from datadir (rw)
      /scripts/setup.sh from scripts (rw,path="setup.sh")
      /var/run/secrets/kubernetes.io/serviceaccount from mongodb-prod-token-4kjjm (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  datadir:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  datadir-mongodb-prod-0
    ReadOnly:   false
  scripts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      mongodb-prod-scripts
    Optional:  false
  mongodb-prod-token-4kjjm:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  mongodb-prod-token-4kjjm
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  geeiq/node-type=ops
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 18m default-scheduler Successfully assigned mongodb/mongodb-prod-0 to ip-10-0-4-244.us-east-2.compute.internal
Normal SuccessfulAttachVolume 18m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-f88bdca6-7794-455a-872f-8230f1ce295d"
Warning FailedMount 18m kubelet MountVolume.MountDevice failed for volume "pvc-f88bdca6-7794-455a-872f-8230f1ce295d" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/us-east-2a/vol-087b3e95d1aa21e03 --scope -- mount -t xfs -o debug,defaults /dev/xvdbq /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/us-east-2a/vol-087b3e95d1aa21e03
Output: Running scope as unit run-4113.scope.
mount: /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/us-east-2a/vol-087b3e95d1aa21e03: wrong fs type, bad option, bad superblock on /dev/nvme1n1, missing codepage or helper program, or other error.
(The same MountVolume.MountDevice FailedMount warning repeats seven more times over the next minute, identical apart from the systemd scope unit number.)
Warning FailedMount 5m30s (x2 over 16m) kubelet Unable to attach or mount volumes: unmounted volumes=[datadir], unattached volumes=[scripts mongodb-prod-token-4kjjm datadir]: timed out waiting for the condition
Warning FailedMount 3m12s (x3 over 10m) kubelet Unable to attach or mount volumes: unmounted volumes=[datadir], unattached volumes=[datadir scripts mongodb-prod-token-4kjjm]: timed out waiting for the condition
Warning FailedMount 56s (x11 over 16m) kubelet (combined from similar events): Unable to attach or mount volumes: unmounted volumes=[datadir], unattached volumes=[datadir scripts mongodb-prod-token-4kjjm]: timed out waiting for the condition
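For more detail than these events give, I could also grep the kubelet journal on the node for this volume (assuming a systemd-based AMI, which the EKS-optimized images are):

journalctl -u kubelet --since "30 min ago" | grep -A5 vol-087b3e95d1aa21e03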
kubectl get pods,svc,pvc,pv -o wide --namespace mongodb
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/mongodb-prod-0 0/1 ContainerCreating 0 20m <none> ip-10-0-4-244.us-east-2.compute.internal <none> <none>
pod/mongodb-prod-arbiter-0 1/1 Running 5 20m 10.0.4.132 ip-10-0-4-244.us-east-2.compute.internal <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/mongodb-prod-0-external NodePort 172.20.91.18 <none> 27017:30001/TCP 20m app.kubernetes.io/component=mongodb,app.kubernetes.io/instance=mongodb-prod,app.kubernetes.io/name=mongodb,statefulset.kubernetes.io/pod-name=mongodb-prod-0
service/mongodb-prod-1-external NodePort 172.20.202.43 <none> 27017:30002/TCP 20m app.kubernetes.io/component=mongodb,app.kubernetes.io/instance=mongodb-prod,app.kubernetes.io/name=mongodb,statefulset.kubernetes.io/pod-name=mongodb-prod-1
service/mongodb-prod-arbiter-headless ClusterIP None <none> 27017/TCP 20m app.kubernetes.io/component=arbiter,app.kubernetes.io/instance=mongodb-prod,app.kubernetes.io/name=mongodb
service/mongodb-prod-headless ClusterIP None <none> 27017/TCP 20m app.kubernetes.io/component=mongodb,app.kubernetes.io/instance=mongodb-prod,app.kubernetes.io/name=mongodb
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE
persistentvolumeclaim/datadir-mongodb-prod-0 Bound pvc-f88bdca6-7794-455a-872f-8230f1ce295d 100Gi RWO mongodbstorage 20m Filesystem
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE VOLUMEMODE
persistentvolume/pvc-f88bdca6-7794-455a-872f-8230f1ce295d 100Gi RWO Retain Bound mongodb/datadir-mongodb-prod-0 mongodbstorage 20m Filesystem
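To rule out a zone mismatch between the PV and the node, I compared their zone labels (my cluster still uses the older failure-domain.beta.kubernetes.io keys, as visible on the PV in the update below):

kubectl get pv pvc-f88bdca6-7794-455a-872f-8230f1ce295d --show-labels
kubectl get node ip-10-0-4-244.us-east-2.compute.internal -L failure-domain.beta.kubernetes.io/zone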
Update:
kubectl describe pv pvc-30f3ca78-134b-4b4d-bac9-385a71a6f7e0
Name:              pvc-30f3ca78-134b-4b4d-bac9-385a71a6f7e0
Labels:            failure-domain.beta.kubernetes.io/region=us-east-2
                   failure-domain.beta.kubernetes.io/zone=us-east-2c
Annotations:       kubernetes.io/createdby: aws-ebs-dynamic-provisioner
                   pv.kubernetes.io/bound-by-controller: yes
                   pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs
Finalizers:        [kubernetes.io/pv-protection]
StorageClass:      mongodbstorage
Status:            Bound
Claim:             mongodb/datadir-mongodb-prod-0
Reclaim Policy:    Retain
Access Modes:      RWO
VolumeMode:        Filesystem
Capacity:          100Gi
Node Affinity:
  Required Terms:
    Term 0:        failure-domain.beta.kubernetes.io/zone in [us-east-2c]
                   failure-domain.beta.kubernetes.io/region in [us-east-2]
Message:
Source:
    Type:       AWSElasticBlockStore (a Persistent Disk resource in AWS)
    VolumeID:   aws://us-east-2c/vol-08aebae8e0d675c4d
    FSType:     xfs
    Partition:  0
    ReadOnly:   false
Events:            <none>
Comments:

"WaitForFirstConsumer" — because when you apply a nodeSelector there is a chance that the node the pod is running on and the PVC's region are not the same. kubernetes.io/docs/concepts/storage/storage-classes/… My understanding of this section of the documentation — "PersistentVolumes will be selected or provisioned conforming to the topology that is specified by the Pod's scheduling constraints" — is that it helps prevent this mismatch. In my case it did: both the PVC and the pod are in the same region after applying this. – Kay

"custom volume provisioner" — I don't know what this is, can you provide a link to some documentation? – Kay