6
votes

I'm trying to run the Jenkins Helm chart. As part of this setup, I'd like to pass in a persistent volume that I provisioned ahead of time (or perhaps exported from another cluster during a migration).

I'm trying to get my persistent volume (PV) and persistent volume claim (PVC) set up in such a way that when Jenkins starts, it uses my predefined PV and PVC.

I think the problem is that the persistent storage definition for the Azure disk points to a VHD in my storage account. Is there any way to point it to an existing managed disk, and not a blob?

This is how I set up my persistent storage using an Azure managed disk:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-home
spec:
  capacity:
    storage: 10Gi
  storageClassName: default
  azureDisk:
    diskName: jenkins-home
    diskURI: https://<storageaccount>.blob.core.windows.net/jenkins-data/jenkins-home.vhd
    fsType: ext4
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: jenkins-home-pvc
    namespace: default
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-home-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: default

I then start helm like this...

helm install --name jenkins stable/jenkins --values=values.yaml

My values.yaml file looks like this:

Persistence:
  ExistingClaim: jenkins-home-pvc

Here is the error I receive when the Jenkins pod starts.

AttachVolume.Attach failed for volume "jenkins-home" : Attach volume "jenkins-home" to instance "aks-agentpool-40897452-0" failed with compute.VirtualMachinesClient#CreateOrUpdate: Failure responding to request: StatusCode=409 -- Original Error: autorest/azure: Service returned an error. Status=409 Code="OperationNotAllowed" Message="Addition of a blob based disk to VM with managed disks is not supported."

1 Answer

9
votes

I posed this question to the Azure team here.

With their help I arrived at the following solution...

I had tried to use the managed disk resource ID before, but it complained that it expected a .vhd file. After adding 'kind: Managed', it was perfectly happy to take the managed disk resource ID.

Creating an empty, formatted managed disk is of course a prerequisite for this to work. Copying the managed disk into the same resource group as the AKS cluster was also required.

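For reference, here is a rough Azure CLI sketch of how the managed disk itself might be created, or copied into the AKS-controlled resource group. The resource group name, SKU, and subscription placeholders below are just examples, not my exact values:

# Create an empty 10 GiB managed disk in the AKS-controlled resource group (placeholder names)
az disk create \
  --resource-group MC_myResourceGroup_myAKSCluster_eastus \
  --name jenkins-home \
  --size-gb 10 \
  --sku Premium_LRS

# Or copy an existing managed disk from another resource group into it
az disk create \
  --resource-group MC_myResourceGroup_myAKSCluster_eastus \
  --name jenkins-home \
  --source /subscriptions/{subscription-id}/resourceGroups/{source-resource-group}/providers/Microsoft.Compute/disks/jenkins-home
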
So now my PV and PVC look like this and it's working...

apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-home
spec:
  capacity:
    storage: 10Gi
  storageClassName: default
  azureDisk:
    kind: Managed
    diskName: jenkins-home
    diskURI: /subscriptions/{subscription-id}/resourceGroups/{aks-controlled-resource-group-name}/providers/Microsoft.Compute/disks/jenkins-home
    fsType: ext4
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: jenkins-home-pvc
    namespace: default
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-home-pvc
  annotations:
    volume.beta.kubernetes.io/storage-class: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: default
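
As a quick sanity check before installing the chart, I apply the manifests and confirm the claim binds. Something like this should work, assuming the two manifests above are saved to jenkins-pv.yaml (the file name is arbitrary):

kubectl apply -f jenkins-pv.yaml
kubectl get pv jenkins-home
kubectl get pvc jenkins-home-pvc

Both should report a STATUS of Bound before running the helm install command above.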