0 votes

I am currently trying to set up volume persistence for my MySQL database on Kubernetes, deployed with kubeadm. The environment is an Amazon EC2 instance using EBS storage disks.

As you can see below, a StorageClass, a PersistentVolume and a PersistentVolumeClaim have been created in order to get MySQL persistence. However, an error occurs when I try to deploy the MySQL pod (see the pod events further down).

mysql-pv.yml:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
mountOptions:
  - debug
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
  labels:
    type: amazonEBS
spec:
  capacity:
    storage: 5Gi
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-ID
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

mysql.yml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.7.30
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: MYPASSWORD
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  type: NodePort
  ports:
    - port: 3306
      targetPort: 3306
      nodePort: 31306
  selector:
    app: mysql

This is my MySQL pod description (output of kubectl describe pod):

Name:           mysql-5c9788fc65-jq2nh
Namespace:      default
Priority:       0
Node:           ip-172-31-31-210/172.31.31.210
Start Time:     Sat, 23 May 2020 12:19:24 +0000
Labels:         app=mysql
                pod-template-hash=5c9788fc65
Annotations:    <none>
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  ReplicaSet/mysql-5c9788fc65
Containers:
  mysql:
    Container ID:   
    Image:          mysql:5.7.30
    Image ID:       
    Port:           3306/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:
      MYSQL_ROOT_PASSWORD:  MYPASS
    Mounts:
      /data/ from mysql-persistent-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-cshk2 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  mysql-persistent-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mysql-pv-claim
    ReadOnly:   false
  default-token-cshk2:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-cshk2
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s

Here is the error I get:

Events:
Type     Reason       Age        From                       Message
----     ------       ----       ----                       -------
Normal   Scheduled    <unknown>  default-scheduler          
Successfully assigned default/mysql-5c9788fc65-jq2nh to ip-172-31-31-210
Warning  FailedMount  39m        kubelet, ip-172-31-31-210  MountVolume.SetUp failed for volume "mysql-pv" : mount failed: exit status 32

Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/29d5cee7-da11-4a0c-b5aa-e262f919d1ba/volumes/kubernetes.io~aws-ebs/mysql-pv --scope -- mount  -o bind /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/vol-06212746d87534157 /var/lib/kubelet/pods/29d5cee7-da11-4a0c-b5aa-e262f919d1ba/volumes/kubernetes.io~aws-ebs/mysql-pv
Output: Running scope as unit: run-r11fefbbda1d241c2985931d3adaaa969.scope
mount: /var/lib/kubelet/pods/29d5cee7-da11-4a0c-b5aa-e262f919d1ba/volumes/kubernetes.io~aws-ebs/mysql-pv: special device /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/vol-06212746d87534157 does not exist.
Warning  FailedMount  39m  kubelet, ip-172-31-31-210  MountVolume.SetUp failed for volume "mysql-pv" : mount failed: exit status 32

Can someone help me?

2
Can you please check here how the mount happens in the case of EBS: github.com/kubernetes/kubernetes/issues/86064. I would also need further information from kubectl describe pvc mysql-pv-claim and kubectl describe pv mysql-pv; you need to check whether they were created successfully. Also, please go inside the node and run lsblk to check the state of the mount. – redzack
Also, could you share the kubelet logs (/var/log/kubelet.log) and /var/log/kube-controller-manager.log? – redzack
In the mount error it says the special device doesn't exist. That means the PVC doesn't get bound to the PV, hence it fails. – redzack

2 Answers

1 vote

Check the state of the PV and PVC to see whether the PVC is actually in the Bound state:

kubectl describe pvc mysql-pv-claim
kubectl describe pv mysql-pv

Do you have the EBS CSI driver installed?
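If the PV and PVC are healthy, a quick status check should look roughly like this (the output below is only a sketch of a healthy binding, using the names from your manifests):

kubectl get pv,pvc
NAME                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS   REASON   AGE
persistentvolume/mysql-pv   5Gi        RWO            Retain           Bound    default/mysql-pv-claim   standard                5m

NAME                                   STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/mysql-pv-claim   Bound    mysql-pv   5Gi        RWO            standard       5m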

Another possible reason: I think you missed adding the --cloud-provider=aws option, which is required by the cloud controller manager (CCM) for the nodes. Check out the similar issue.
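A quick way to confirm whether that flag is already set on a node (just a suggested check, assuming the default kubeadm file locations mentioned below):

ps aux | grep '[k]ubelet'                                # running kubelet command line
grep cloud-provider /var/lib/kubelet/kubeadm-flags.env   # kubeadm-managed kubelet flags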

The following link, from the docs on cluster configuration for using EBS, lists all the required IAM permissions and a working example of how to create and mount an EBS volume in Kubernetes.

With a kubeadm setup, the kubelet configuration is defined in /var/lib/kubelet/config.yaml and /var/lib/kubelet/kubeadm-flags.env.

If the cluster was deployed using kubeadm, the flag has to be defined on all nodes in kubeadm-flags.env. To resolve this manually, add --cloud-provider=aws to kubeadm-flags.env and then restart the services.
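For illustration, the flags file could end up looking roughly like this after the change (the other flags shown here are only placeholders; keep whatever arguments are already present and just append the cloud provider):

# /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2 --cloud-provider=aws"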

Then reload and restart the kubelet:

systemctl daemon-reload && systemctl restart kubelet

Alternatively, provide the following configuration to kubeadm (adapted from an openstack example; the provider has been changed to aws for your case).

Check the following blog for better understanding.

apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: "aws"
    cloud-config: "/etc/kubernetes/cloud.conf"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  extraArgs:
    cloud-provider: "aws"
    #cloud-config: "/etc/kubernetes/cloud.conf"
  extraVolumes:
  - name: cloud
    hostPath: "/etc/kubernetes/cloud.conf"
    mountPath: "/etc/kubernetes/cloud.conf"
controllerManager:
  extraArgs:
    cloud-provider: "aws"
    #cloud-config: "/etc/kubernetes/cloud.conf"
  extraVolumes:
  - name: cloud
    hostPath: "/etc/kubernetes/cloud.conf"
    mountPath: "/etc/kubernetes/cloud.conf"
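If you go the configuration-file route, the two documents above live in a single file passed to kubeadm at init time (the file name kubeadm-config.yaml is just an example here):

kubeadm init --config kubeadm-config.yaml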

0 votes

Here is some additional information:

kubectl describe pvc mysql-pv-claim:

Name:          mysql-pv-claim
Namespace:     default
StorageClass:  standard
Status:        Bound 
Volume:        mysql-pv
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      5Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Mounted By:    mysql-5c9788fc65-jq2nh
Events:        <none>

kubectl describe pv mysql-pv:

Name:            mysql-pv
Labels:          type=amazonEBS
Annotations:     pv.kubernetes.io/bound-by-controller: yes
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    standard
Status:          Bound
Claim:           default/mysql-pv-claim
Reclaim Policy:  Retain
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        5Gi
Node Affinity:   <none>
Message:         
Source:
    Type:       AWSElasticBlockStore (a Persistent Disk resource in AWS)
    VolumeID:   vol-06212746d87534157
    FSType:     ext4
    Partition:  0
    ReadOnly:   false
Events:         <none>

lsblk:

NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0         7:0    0   18M  1 loop /snap/amazon-ssm-agent/1566
loop1         7:1    0 93.9M  1 loop /snap/core/9066
loop2         7:2    0 93.8M  1 loop /snap/core/8935
nvme0n1     259:0    0   10G  0 disk 
nvme1n1     259:1    0   15G  0 disk 
└─nvme1n1p1 259:2    0   15G  0 part /

I want to use nvme0n1.
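To narrow down whether the EBS volume was actually attached and formatted, these are the checks I can run on the node (the plugin path is copied from the FailedMount event above; this is only a suggested check):

sudo file -s /dev/nvme0n1                                      # prints just "data" if the disk has no filesystem yet
ls -l /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/   # the "special device" directory the kubelet tried to bind-mount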

I don't have the kubelet log file (/var/log/kubelet.log) or /var/log/kube-controller-manager.log.
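With kubeadm the kubelet normally logs to journald and the controller manager runs as a static pod, so the closest equivalents are probably (the pod name suffix below is just a placeholder for the control-plane node name):

journalctl -u kubelet --since "1 hour ago"
kubectl logs -n kube-system kube-controller-manager-<control-plane-node-name>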