
I want to know if it's possible that multiple PersistentVolumeClaims bind to the same local persistent volume.

My use case is the following: I want to build a DaemonSet that writes some data (the same data, actually) to each node's local disk. Then, any other pod scheduled on any node should be able to read that data. Basically, a kind of write-once-read-many policy at the node level.

I know that I can do that using the hostPath type of volume, but it's a bit hard to manage, so I figured local persistent volumes would be a better approach.

My wish would be the following:

  • Create the local Persistent Volume (named pv) with ReadWriteOnce and ReadOnlyMany access modes
  • Create the first persistent volume claim (pvc1) with ReadWriteOnce access mode and use it in the DaemonSet that writes the data in the volume. So pvc1 should bind to pv
  • Create the second persistent volume claim (pvc2) with ReadOnlyMany access mode that is used in any other pod that reads that data (so pvc2 should also bind to pv)
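In manifests, my wish would look roughly like this (names, paths, node names and sizes are just placeholders); note that accessModes on the PV list its capabilities, while each PVC requests one of them:

```yaml
# pv: a local PersistentVolume advertising both access modes
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/data            # placeholder path on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1       # placeholder node name
---
# pvc1: the writer's claim, used by the DaemonSet
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 1Gi
---
# pvc2: the readers' claim (should also bind to pv, per my wish)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc2
spec:
  accessModes:
    - ReadOnlyMany
  storageClassName: local-storage
  resources:
    requests:
      storage: 1Gi
```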

Is this possible?

I read that once a PVC is bound to a PV, that PV is "locked", meaning that no other PVC can bind to it. Is this really how it works? It seems a bit limiting for this kind of scenario, where we have write-once-read-many operations.

Thanks!


1 Answer


DaemonSets and PVCs for RWO volume types do not mix well, because all of the DaemonSet's Pods would share the same PVC. And for local volumes, that would result in only one replica being scheduled, since the PVC restricts all Pods using it to the one node where its PV lives.
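To illustrate the problem, a minimal sketch (names are hypothetical): every replica of this DaemonSet references the same claim, so for a local volume they all get pinned to the PV's node:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: data-writer
spec:
  selector:
    matchLabels:
      app: data-writer
  template:
    metadata:
      labels:
        app: data-writer
    spec:
      containers:
        - name: writer
          image: busybox
          command: ["sh", "-c", "sleep infinity"]
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: pvc1   # one shared PVC -> only one schedulable node
```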

You could potentially solve this by using a StatefulSet, which supports volumeClaimTemplates that create one PVC per replica, and scaling it to the number of nodes in the cluster. However, your consumer pods would then need to know about and pick a specific PVC to use, rather than use whatever volume is on their node.
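A hedged sketch of that workaround (assuming a three-node cluster and a local-storage StorageClass; all names are placeholders). The template stamps out PVCs named data-data-writer-0, data-data-writer-1, and so on, which consumer pods would have to reference by name:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: data-writer
spec:
  serviceName: data-writer
  replicas: 3                # scale to the number of nodes
  selector:
    matchLabels:
      app: data-writer
  template:
    metadata:
      labels:
        app: data-writer
    spec:
      containers:
        - name: writer
          image: busybox
          command: ["sh", "-c", "sleep infinity"]
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:      # one PVC per replica: data-data-writer-0, -1, -2
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: local-storage
        resources:
          requests:
            storage: 1Gi
```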

I think your use case would be better addressed by writing a CSI driver. A CSI driver has a DaemonSet component, which can initialize the data on driver startup. Then, when it implements NodePublishVolume (i.e., the mount into the pod), it can bind-mount the data directory into the pod's container. You can make this volume type RWX, and you probably don't need to implement any of the controller routines for provisioning or attaching.
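As a sketch of what the Kubernetes side of such a driver could look like (the driver name is hypothetical): with the Ephemeral lifecycle mode and attachRequired: false, only the node plugin's DaemonSet is needed, and consumer pods can mount the data as an inline CSI volume without any PVC at all:

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: node-data.example.com     # hypothetical driver name
spec:
  attachRequired: false           # no controller attach/detach step
  podInfoOnMount: false
  volumeLifecycleModes:
    - Ephemeral                   # inline volumes, no provisioning either
---
# a consumer pod mounts the node-local data via an inline CSI volume;
# the driver's NodePublishVolume bind-mounts the data directory here
apiVersion: v1
kind: Pod
metadata:
  name: reader
spec:
  containers:
    - name: reader
      image: busybox
      command: ["sh", "-c", "sleep infinity"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
          readOnly: true
  volumes:
    - name: shared-data
      csi:
        driver: node-data.example.com
```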