
I'm looking for help on how to correctly use local-storage PVCs in Kubernetes.

We provisioned a Kubespray cluster on Ubuntu with the local-storage provisioner enabled.

We are trying to deploy a StatefulSet that uses the local-storage provisioner, like this:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: ps
  name: ps-r
spec:
  selector:
    matchLabels:
      infrastructure: ps
      application: redis
      environment: staging
  serviceName: hl-ps-redis
  replicas: 1
  template:
    metadata:
      namespace: ps
      labels:
        infrastructure: ps
        application: redis
        environment: staging
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: ps-redis
          image: 1234567890.dkr.ecr.us-west-2.amazonaws.com/redis:latest
          ports:
            - containerPort: 6379
              protocol: TCP
              name: redis
          volumeMounts:
            - name: ps-redis-redis
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        namespace: project-stock
        name: ps-redis-redis
        labels:
          infrastructure: ps
          application: redis
          environment: staging
      spec:
        storageClassName: local-storage
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi

The PVC is created, but hangs in the Pending state:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ps-redis-redis-ps-r-0
  namespace: project-stock
  selfLink: >-
    /api/v1/namespaces/project-stock/persistentvolumeclaims/ps-redis-redis-ps-r-0
  uid: 2fac22e3-c3dc-4cbf-aeed-491f12b430e8
  resourceVersion: '384774'
  creationTimestamp: '2020-11-10T08:25:39Z'
  labels:
    application: redis
    environment: staging
    infrastructure: ps
  finalizers:
    - kubernetes.io/pvc-protection
  managedFields:
    - manager: kube-controller-manager
      operation: Update
      apiVersion: v1
      time: '2020-11-10T08:25:39Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:labels':
            .: {}
            'f:application': {}
            'f:environment': {}
            'f:infrastructure': {}
        'f:spec':
          'f:accessModes': {}
          'f:resources':
            'f:requests':
              .: {}
              'f:storage': {}
          'f:storageClassName': {}
          'f:volumeMode': {}
        'f:status':
          'f:phase': {}
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: local-storage
  volumeMode: Filesystem
status:
  phase: Pending

Storage class:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
  selfLink: /apis/storage.k8s.io/v1/storageclasses/local-storage
  uid: c29adff6-a8a2-4705-bb3b-155e1f7c13a3
  resourceVersion: '1892'
  creationTimestamp: '2020-11-09T12:09:56Z'
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"local-storage"},"provisioner":"kubernetes.io/no-provisioner","volumeBindingMode":"WaitForFirstConsumer"}
  managedFields:
    - manager: kubectl
      operation: Update
      apiVersion: storage.k8s.io/v1
      time: '2020-11-09T12:09:56Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            .: {}
            'f:kubectl.kubernetes.io/last-applied-configuration': {}
        'f:provisioner': {}
        'f:reclaimPolicy': {}
        'f:volumeBindingMode': {}
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

The pod does not start, with this scheduling event: 0/2 nodes are available: 1 Insufficient cpu, 1 node(s) didn't find available persistent volumes to bind.

What are we doing wrong?

Since you have just 1 replica, the scheduler picks one of the nodes to create everything on. It created the PVC, but as mentioned in the error, the first node doesn't have any more CPU to run the pod, and the pod can't be deployed on the second node as there is no PVC there. If there are more resources on the second node, try to deploy everything there; you can use nodeAffinity for that, there is an example of it. You could also increase the CPU on the first node and it should work. Let me know if that answers your question. – Jakub
The node which has insufficient CPU is a master node. The PV was not created at all. The PVC is created, but it is Pending and not assigned to any of the nodes. – roman
That's why you have to add more CPU to the master node, or deploy all of these dependencies (pod, PV, PVC) on your worker node. If you have insufficient CPU, you won't be able to deploy that pod on your master node. – Jakub
It won't deploy on the master because of the taints. Adding node affinity to deploy on the worker did not help either. – roman
It is exactly the same. I guess we have to mount the OS volumes to be used by local-storage manually. We switched to local-path-provisioner and it now works just fine. – roman
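
The last comment describes the underlying issue: with provisioner kubernetes.io/no-provisioner, the local-storage class never provisions PVs dynamically, so a local PersistentVolume has to be created by hand on a node that actually has the backing directory before the PVC can bind (WaitForFirstConsumer only delays binding until the pod is scheduled). A minimal sketch of such a PV; the node name kube-worker-1 and the path /mnt/disks/redis are placeholders for whatever exists on the real worker:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ps-redis-local-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  volumeMode: Filesystem
  local:
    # directory (or mounted disk) that must already exist on the node
    path: /mnt/disks/redis
  nodeAffinity:
    # nodeAffinity is mandatory for local volumes: it pins the PV,
    # and therefore the pod that claims it, to this node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - kube-worker-1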

1 Answer


We finally solved this by using https://github.com/rancher/local-path-provisioner, which is much easier to configure properly.
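
In case it helps others: after installing the provisioner with the manifest linked from its README, the main change on our side was the storage class in the volumeClaimTemplates of the StatefulSet above, roughly like this (assuming the provisioner's default local-path StorageClass name):

  volumeClaimTemplates:
    - metadata:
        name: ps-redis-redis
        labels:
          infrastructure: ps
          application: redis
          environment: staging
      spec:
        # local-path is the default StorageClass created by local-path-provisioner;
        # it provisions a node-local (hostPath) volume on the node the pod lands on
        storageClassName: local-path
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi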