3
votes

I have created a deployment in the xyz-namespace namespace, and it uses a PVC. I can create the deployment and access it, and it works properly, but when I scale the deployment from the Kubernetes console, the new pod stays in the Pending state.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins
  namespace: xyz-namespace
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 5Gi

The Deployment object is like below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: db-service
  namespace: xyz-namespace
  labels:
    k8s-app: db-service
    Name: db-service
    ServiceName: db-service
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: data
      Name: db-service
      ServiceName: db-service
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: jenkins
        tier: data
        Name: db-service
        ServiceName: db-service
    spec:
      hostname: jenkins
      initContainers:
      - command:
        - "/bin/sh"
        - "-c"
        - chown -R 1000:1000 /var/jenkins_home
        image: busybox
        imagePullPolicy: Always
        name: jenkins-init
        volumeMounts:
        - name: jenkinsvol
          mountPath: "/var/jenkins_home"
      containers:
      - image: jenkins/jenkins:lts
        name: jenkins
        ports:
        - containerPort: 8080
          name: jenkins1
        - containerPort: 8080
          name: jenkins2
        volumeMounts:
        - name: jenkinsvol
          mountPath: "/var/jenkins_home"
      volumes:
      - name: jenkinsvol
        persistentVolumeClaim:
          claimName: jenkins
      nodeSelector:
        nodegroup: xyz-testing

The Deployment is created fine and works as well, but when I try to scale it from the console, the new pod gets stuck in the Pending state.

If I remove the persistent volume and then scale, it works fine, but with the persistent volume it does not.

Add output of kubectl get events -A – Arghya Sadhu

AttachVolume.Attach failed for volume "pvc-123-xyz" : googleapi: Error 400: RESOURCE_IN_USE_BY_ANOTHER_RESOURCE - The disk resource 'projects/abc/zones/abc-a/disks/pvc-123-xyz' is already being used by 'projects/abc/zones/abc-a/instances/instance-123' – Hushen

@ArghyaSadhu I have fetched the events and got the result above. Can you please check it? – Hushen

2 Answers

1
vote

Since you are using the standard storage class, I assume you are using the default GCEPersistentDisk volume plugin. In this case you cannot pick arbitrary access modes; they are determined by the storage provider (GCP in your case, as you are using GCE persistent disks), and these disks only support the ReadWriteOnce (RWO) and ReadOnlyMany (ROX) access modes. If you try to create a ReadWriteMany (RWX) PV, it will never reach a successful state (which would be your case if you set the PVC with accessModes: ReadWriteMany).
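As a minimal sketch (the claim name shared-claim is made up for illustration and is not from the question), such a request against the default standard class would look like this; per the above, it will never reach a successful state on GCE persistent disks:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-claim          # example name, not from the question
  namespace: xyz-namespace
spec:
  accessModes:
  - ReadWriteMany             # not supported by GCE persistent disks
  storageClassName: standard
  resources:
    requests:
      storage: 5Gi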

Also, if any pod tries to attach a ReadWriteOnce volume on some other node, you will get the following error:

FailedMount Failed to attach volume "pv0001" on node "xyz" with: googleapi: Error 400: The disk resource 'abc' is already being used by 'xyz'

The references above are taken from this article.

As mentioned here and here, NFS is the easiest way to get ReadWriteMany, since all nodes need to be able to read and write simultaneously to the storage device you are using for your pods.

So I would suggest using an NFS storage option. In case you want to test it, here is a good guide by Google using its Filestore solution, which provides fully managed NFS file servers.
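If you go the NFS route, a rough sketch of a statically provisioned NFS-backed PersistentVolume plus a ReadWriteMany claim could look like the following. The server IP 10.0.0.2, export path /vol1 and PV name jenkins-nfs are placeholders for this example; substitute the values of your own Filestore instance or NFS server.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-nfs            # example name
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: 10.0.0.2           # placeholder: Filestore/NFS server IP
    path: /vol1                # placeholder: exported share
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins
  namespace: xyz-namespace
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""         # skip dynamic provisioning and bind to the PV above
  resources:
    requests:
      storage: 5Gi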

0
votes

Your PersistentVolumeClaim is set to:

accessModes:
- ReadWriteOnce

But it should be set to:

accessModes:
- ReadWriteMany

The ReadWriteOnce access mode means that

the volume can be mounted as read-write by a single node [1].

When you scale your deployment, the new pods are most likely scheduled to different nodes; therefore you need ReadWriteMany.

[1] https://kubernetes.io/docs/concepts/storage/persistent-volumes/
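For completeness, here is a sketch of the claim from the question with only the access mode changed. Note that this only helps if the storage class actually supports ReadWriteMany; as the other answer points out, GCE persistent disks do not, so an RWX-capable backend such as NFS/Filestore would be needed behind the class.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins
  namespace: xyz-namespace
spec:
  accessModes:
  - ReadWriteMany              # volume can be mounted read-write by many nodes
  storageClassName: standard   # must point at a class whose backend supports RWX
  resources:
    requests:
      storage: 5Gi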