12
votes

I am trying to deploy a Helm chart which uses a PersistentVolumeClaim and a StorageClass to dynamically provision the required storage. This works as expected, but I can't find any configuration which allows a workflow like

helm delete xxx

# Make some changes and repackage chart

helm install --replace xxx

I don't want to run the release constantly, and I want to reuse the storage in future deployments.

Setting the storage class to reclaimPolicy: Retain keeps the disks, but Helm will delete the PVCs and orphan the volumes. Annotating the PVCs so that Helm does not delete them fixes this problem, but then running install causes the error

Error: release xxx failed: persistentvolumeclaims "xxx-xxx-storage" already exists
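
For reference, the annotation goes in the PVC's metadata; a minimal sketch (the claim name matches the error above, sizes are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: xxx-xxx-storage
  annotations:
    # Tells Helm/Tiller to skip this resource during `helm delete`
    "helm.sh/resource-policy": keep
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```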

I think I have misunderstood something fundamental to managing releases in helm. Perhaps the volumes should not be created in the chart at all.

2
Looks like the problem might have been due to annotating an extra PVC which wasn't supposed to be retained. I would still like to know what the correct approach is though. – user3125280

2 Answers

10
votes

A PersistentVolumeClaim is just a binding between your actual PersistentVolume and your pod.

Using the "helm.sh/resource-policy": keep annotation for the PVC is not the best idea, because of this remark in the documentation:

The annotation "helm.sh/resource-policy": keep instructs Tiller to skip this resource during a helm delete operation. However, this resource becomes orphaned. Helm will no longer manage it in any way. This can lead to problems if using helm install --replace on a release that has already been deleted, but has kept resources.

If you create the PV manually instead, then when you delete your release Helm will remove only the PVC; the PV stays behind (with the Retain policy it is marked "Released", and you may need to clear its spec.claimRef before it becomes "Available" again), and on the next deployment the new claim will reuse it. So you don't actually need to keep the PVC in the cluster to keep your data. But to make the claim always bind to the same PV, you need to use labels and selectors.
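
For example, a retained volume stuck in the "Released" state can be made bindable again by clearing its stale claim reference (this requires a running cluster; the volume name is a placeholder):

```shell
# Remove the stale claimRef so the PV returns to "Available"
kubectl patch pv myappvolume -p '{"spec":{"claimRef": null}}'
```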

To keep and reuse volumes you can:

  1. Create a PersistentVolume with a label, for example for_app=my-app, and set the "Retain" policy for that volume like this (note that PVs are cluster-scoped, so no namespace is set; the hostPath source is just an example, use your real storage backend):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: myappvolume
  labels:
    for_app: my-app
spec:
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/my-app
  2. Modify your PersistentVolumeClaim configuration in Helm. You need to add a selector so it only uses PersistentVolumes with the label for_app=my-app:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myappvolumeclaim
  namespace: my-app
spec:
  selector:
    matchLabels:
      for_app: my-app
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
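
Assuming the manifests above have been applied to a cluster, you can check that the claim bound to the labeled volume (names as above):

```shell
# The PV should carry the label and show STATUS "Bound"
kubectl get pv -l for_app=my-app
# The claim should show the same volume in its VOLUME column
kubectl get pvc myappvolumeclaim -n my-app
```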

So now your application will use the same volume each time it starts.

But please keep in mind: you may need to use selectors for other apps in the same namespace to prevent them from binding to your PV.

0
votes

Actually, I'd suggest using StatefulSets and volumeClaimTemplates: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/

The example there should speak for itself.
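
A minimal sketch of that approach, with placeholder names and image: each replica gets its own PVC stamped from the template, and those PVCs are not deleted when the pods or the StatefulSet itself are removed, so the data survives a `helm delete` / reinstall cycle.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
spec:
  serviceName: my-app
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx        # placeholder image
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data            # produces PVCs named data-my-app-0, data-my-app-1, ...
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 5Gi
```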