I am using Google Container Engine to run a StatefulSet for a MongoDB replica set (3 replica pods).
This works fine with dynamic provisioning of persistent storage; that is, new storage is provisioned for each pod when the StatefulSet is created.
But if I delete and re-create the StatefulSet, it seems I cannot re-bind the old persistent volumes: new storage is provisioned again, so the data is lost. Ideally, the persistent storage should survive deletion of the Kubernetes cluster itself, with the data preserved and ready to be reused in a new cluster.
Is there a way to create GCE Persistent disks and use them in the persistent volume claim of the StatefulSet?
[Updated 20 September 2017]
Found the answer. The solution is as follows (credit to @RahulKrishnan R A):
- Create a StorageClass, specifying the underlying disk type and zone.
- Create a PersistentVolume that specifies the StorageClass created above and references the existing persistent disk you wish to mount.
- Create a PersistentVolumeClaim. It is important to name the PVC `<pvc template name>-<statefulset name>-<ordinal number>`. (The correct name is the trick!) Specify the `volumeName` of the PV created above and the StorageClass.
- Create as many PVs and PVCs as you have replicas, with the correct names.
- Create the StatefulSet with the PVC template.
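A minimal sketch of the manifests for replica 0, assuming a pre-created GCE persistent disk named `mongo-disk-0` in zone `us-central1-a`, a StatefulSet named `mongo`, and a volume claim template named `mongo-persistent-storage` (all of these names, the zone, and the sizes are illustrative; adjust them to your setup):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ssd-storage
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  zone: us-central1-a          # must match the zone of the existing disks
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mongo-pv-0
spec:
  storageClassName: ssd-storage
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: mongo-disk-0       # the pre-existing GCE persistent disk
    fsType: ext4
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  # Name follows <pvc template name>-<statefulset name>-<ordinal number>:
  # template "mongo-persistent-storage", StatefulSet "mongo", replica 0.
  name: mongo-persistent-storage-mongo-0
spec:
  storageClassName: ssd-storage
  volumeName: mongo-pv-0       # bind this claim to the PV above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
```

Repeat the PV and PVC for each replica (`mongo-pv-1` / `mongo-persistent-storage-mongo-1`, and so on). In the StatefulSet, the entry under `volumeClaimTemplates` must be named `mongo-persistent-storage` and use the same StorageClass, so that the controller finds these pre-created, correctly named PVCs instead of provisioning new storage.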