0
votes

I'm trying to provision a PVC for a pod deployment and I'm facing this error:

Failed to provision volume with StorageClass "xxxxxxxxxxx": could not get storage key for storage account yyyyyyyyyyy: could not get storage key for storage account yyyyyyyyyyy: Retriable: false, RetryAfter: 0s, HTTPStatusCode: 400, RawError: Retriable: false, RetryAfter: 0s, HTTPStatusCode: 400, RawError: azure.BearerAuthorizer#WithAuthorization: Failed to refresh the Token for request to http://localhost:7788/subscriptions/zzzzzzzzzzz-aaaaaa-bbbbbb/resourceGroups/MC_kkkkkkkkkkkkkkkkkkkk/providers/Microsoft.Storage/storageAccounts/yyyyyyyyyyyyyyy/listKeys?api-version=2019-06-01: StatusCode=400 -- Original Error: adal: Refresh request failed. Status Code = '400'. Response body: {"error":"unauthorized_client","error_description":"AADSTS700016: Application with identifier 'aaaaaa-bbbbbbbb-cccccccccccccccc' was not found in the directory 'ppppppppppp-aaaaaaaaaaaa-tttttttttttt'. This can happen if the application has not been installed by the administrator of the tenant or consented to by any user in the tenant. You may have sent your authentication request to the wrong tenant.

I'm pretty new to AKS and I believe there's something very basic I'm missing, but I haven't found any help on the web.

This is what I've already double-checked:

  • Using correct account login and subscription
  • The storage account referred to does exist
  • It is in the same region and resource group as the AKS cluster
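
For reference, these points can be double-checked from the Azure CLI. The storage account and resource group names below are the same masked placeholders as above, and <aks-cluster-name> / <aks-resource-group> stand in for the cluster values, so substitute your real names:

# confirm the logged-in account and subscription
az account show --query "{subscription:name, tenant:tenantId}" -o table

# confirm the storage account exists and note its location
az storage account show --name yyyyyyyyyyyy --resource-group MC_zzzzzzzzzzzzzzzzz --query "{location:location, sku:sku.name}" -o table

# compare with the cluster's location and node resource group
az aks show --name <aks-cluster-name> --resource-group <aks-resource-group> --query "{location:location, nodeResourceGroup:nodeResourceGroup}" -o table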

Storage class manifest

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: xxxxxxxx
provisioner: kubernetes.io/azure-file
parameters:
  skuName: Standard_LRS
  storageAccount: yyyyyyyyyyyy
  resourceGroup: MC_zzzzzzzzzzzzzzzzz

PVC Manifest

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: xxxxxxxx
  resources:
    requests:
      storage: 5Gi 

I'm using Lens to manage my cluster, and the PVC resource hangs in the Pending state.
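
The same state can be inspected outside Lens by describing the claim (assuming it lives in the default namespace):

# show the claim's status and provisioning events
kubectl describe pvc my-pvc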

Can you help me figure this out?


1 Answer

2
votes

According to the GitHub issue here, this happens if the cluster does not have a service principal, or if the service principal's credentials have expired after their one-year validity period.

You can verify it by running the command below. Retrieve the aadClientId, aadClientSecret, and tenantId by opening the /etc/kubernetes/azure.json file on any master or agent node.

az login --service-principal -u <aadClientId> -p <aadClientSecret> -t <tenantId>
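
One way to pull those values out of azure.json, assuming jq is available on the node (otherwise just open the file):

# run on a master or agent node
sudo jq '{aadClientId, aadClientSecret, tenantId}' /etc/kubernetes/azure.json

If that login fails with a similar AADSTS700016 error, the cluster's service principal is most likely the culprit.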

Updating or rotating the credentials by following the doc should solve it.
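
A sketch of the rotation step, assuming the Azure CLI with placeholder names (flag names can vary slightly between CLI versions, so check the linked doc for the exact procedure):

# generate a new secret for the existing service principal
SP_SECRET=$(az ad sp credential reset --id <aadClientId> --query password -o tsv)

# push the new credential to the cluster
az aks update-credentials \
  --resource-group <aks-resource-group> \
  --name <aks-cluster-name> \
  --reset-service-principal \
  --service-principal <aadClientId> \
  --client-secret "$SP_SECRET"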

Alternatively, you can use a managed identity for permissions instead of a service principal. Managed identities are easier to manage than service principals and do not require updates or rotations. For more information, see Use managed identities.
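
For example, an existing cluster can be converted with something along these lines (placeholder names again; this triggers a cluster update, so review the doc before running it):

az aks update \
  --resource-group <aks-resource-group> \
  --name <aks-cluster-name> \
  --enable-managed-identity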