
I am trying to migrate my deployment from Minikube to a kops cluster on AWS. In my deployment, I have multiple pods that share the same PVC (PersistentVolumeClaim).

The problem is that the EBS-backed PVC cannot be accessed from pods running on different nodes (different instances). For example, I have 3 pods and 2 nodes. Assume pod1 is running on node1 and pod2 and pod3 are running on node2. pod2 and pod3 will not be able to attach the EBS volume once pod1 has attached it.

How can I make an EBS-backed PVC accessible from pods running on different nodes in a kops cluster on AWS?

volume.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: media-volume
spec:
  storageClassName: gp2-manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce        # EBS volumes can only be attached read-write by a single node
  awsElasticBlockStore:
    fsType: ext4
    volumeID: <volumeID>
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: media-volume-claim
spec:
  storageClassName: gp2-manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi         # must not exceed the PV's 1Gi capacity, or the claim will never bind
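
For reference, this is roughly how the pods consume the claim (a minimal sketch; the Deployment name and image are illustrative, not my actual manifests). With more than one replica scheduled across nodes, the attach errors appear:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: media-app                  # illustrative name
spec:
  replicas: 3                      # replicas may land on different nodes
  selector:
    matchLabels:
      app: media-app
  template:
    metadata:
      labels:
        app: media-app
    spec:
      containers:
        - name: app
          image: nginx             # placeholder image
          volumeMounts:
            - name: media
              mountPath: /data
      volumes:
        - name: media
          persistentVolumeClaim:
            claimName: media-volume-claim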
Could you try to change your access mode from ReadWriteOnce to ReadWriteMany? As mentioned in the documentation, ReadWriteOnce means the volume can be mounted as read-write by a single node. – Jakub
An EBS-backed PVC cannot use the ReadWriteMany access mode, because EBS only allows ReadWriteOnce. Currently, I am using EFS, but my boss prefers to use EBS rather than EFS. – Kyaw Min Thu L
Sorry, my mistake. Continuing this topic, I would suggest EFS instead, but as your boss prefers EBS, maybe GlusterFS could help here? It was suggested by @Harsh Manvar here. There is a tutorial about it on Medium. – Jakub

1 Answer


The quick answer here would be to use EFS with the ReadWriteMany access mode instead of EBS, as EBS allows only ReadWriteOnce.
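
A minimal sketch of what that could look like, exposing EFS through an NFS-backed PersistentVolume (the file system ID, region, and the efs-manual class name are placeholders for your setup):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: media-volume-efs
spec:
  storageClassName: efs-manual
  capacity:
    storage: 20Gi          # EFS is elastic; this value is only used to match the claim
  accessModes:
    - ReadWriteMany        # NFS-backed volumes can be mounted by many nodes
  nfs:
    server: <file-system-id>.efs.<region>.amazonaws.com
    path: /
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: media-volume-claim
spec:
  storageClassName: efs-manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi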

But as @Kyaw Min Thu L mentioned in the comments:

Currently, I am using EFS, but my boss prefers to use EBS rather than EFS.

As a workaround for this, I would suggest using GlusterFS, as proposed by @Harsh Manvar here.

As you mention, pinning the EBS volume to one node with affinity and node selectors would prevent scaling; with EBS, only ReadWriteOnce will work.

Sharing my experience: if you perform many file-system operations, frequently pushing and fetching files, EFS can be slow and degrade application performance, since its operation rate is limited.

However, you can use GlusterFS, which provisions EBS volumes behind the scenes. GlusterFS also supports ReadWriteMany, and it will be faster than EFS because it is backed by block storage (SSD).

There is a tutorial about this on Medium.
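
On the Kubernetes side, a rough sketch of a StorageClass using the in-tree GlusterFS provisioner (this assumes a Heketi service managing the Gluster cluster; the resturl is a placeholder):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi.example.com:8080"   # Heketi REST endpoint (placeholder)
  restauthenabled: "false"                    # enable and add credentials in production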

GlusterFS is a connector-based storage system, i.e. by itself Gluster doesn't provide storage; it connects to durable storage and presents it as a single pool, making it seamless for Kubernetes pods.

The high-level topology is described in the diagram below, where one EBS volume is mounted per EC2 instance running a Kubernetes node. The setup below has 3 EC2 instances, 3 EBS volumes, and 3 Kubernetes nodes. We form a GlusterFS cluster from the 3 EBS volumes. We can then define and carve out several persistent volumes (PV1, PV2 … PV5) from the 3 mounted EBS volumes, making the storage homogeneous and seamless for Kubernetes pods to claim.

Kubernetes schedules pods on any node as per its algorithm, and the pods can claim a persistent volume via a PersistentVolumeClaim. A persistent volume claim (PVC) is nothing but a label that identifies a connection between a pod and a persistent volume. Per the diagram below, Pod C claims PV1 while Pod A claims PV4.

[Diagram: three EC2 instances, each with a mounted EBS volume, forming a GlusterFS cluster carved into PV1 … PV5, which pods claim via PVCs]
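
Tying this back to the original manifest, a sketch of a claim against such a StorageClass (reusing the claim name from the question); pods on different nodes could then mount it concurrently:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: media-volume-claim
spec:
  storageClassName: glusterfs-storage
  accessModes:
    - ReadWriteMany        # valid here, since GlusterFS supports shared access
  resources:
    requests:
      storage: 20Gi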