6
votes

I was experimenting with Kubernetes Persistent Volumes. I can't find a clear explanation in the Kubernetes documentation and the behaviour is not what I expect, so I would like to ask here.

I configured the following Persistent Volume and Persistent Volume Claim.

kind: PersistentVolume
apiVersion: v1
metadata:
  name: store-persistent-volume
  namespace: test
spec:
  storageClassName: hostpath
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/Volumes/Data/data"

---

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: store-persistent-volume-claim
  namespace: test
spec:
  storageClassName: hostpath
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

and the following Deployment and Service configuration.

kind: Deployment
apiVersion: apps/v1beta2
metadata:
  name: store-deployment
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: store
  template:
    metadata:
      labels:
        k8s-app: store
    spec:
      volumes:
      - name: store-volume
        persistentVolumeClaim:
          claimName: store-persistent-volume-claim
      containers:
      - name: store
        image: localhost:5000/store
        ports:
        - containerPort: 8383
          protocol: TCP
        volumeMounts:
        - name: store-volume
          mountPath: /data

---
#------------ Service ----------------#

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: store
  name: store
  namespace: test
spec:
  type: LoadBalancer
  ports:
  - port: 8383
    targetPort: 8383
  selector:
    k8s-app: store

As you can see, I defined '/Volumes/Data/data' as the Persistent Volume's host path and expect it to be mounted at '/data' in the container.

So I am assuming that whatever is in '/Volumes/Data/data' on the host should be visible in the '/data' directory in the container. Is this assumption correct? Because this is definitely not happening at the moment.

My second assumption is that whatever I save at '/data' should be visible on the host, which is also not happening.
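
To make these assumptions concrete, this is roughly how I test them from the command line (assuming the store image has a shell; the file names are just examples):

# Find the pod created by the Deployment (label k8s-app=store).
POD=$(kubectl get pods -n test -l k8s-app=store -o jsonpath='{.items[0].metadata.name}')

# Write a file inside the container at the mount path...
kubectl exec -n test "$POD" -- sh -c 'echo hello > /data/from-container.txt'
# ...and expect it to appear on the host at the hostPath.
ls -l /Volumes/Data/data/from-container.txt

# And the other way round: create a file on the host...
echo hello > /Volumes/Data/data/from-host.txt
# ...and expect to see it from inside the container.
kubectl exec -n test "$POD" -- ls /data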

I can see from the Kubernetes console that everything started correctly (Persistent Volume, Claim, Deployment, Pod, Service...).
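
For reference, the same check from the command line; what I look for is that the claim is Bound to the volume and the pod is Running:

kubectl get pv store-persistent-volume                   # STATUS should be Bound
kubectl get pvc -n test store-persistent-volume-claim    # STATUS Bound, VOLUME store-persistent-volume
kubectl get pods -n test -l k8s-app=store                # STATUS should be Running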

Am I understanding the persistent volume concept correctly at all?

PS. I am trying this on a Mac with Docker (18.05.0-ce-mac67(25042), edge channel); maybe it is not supposed to work on a Mac?
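
A Kubernetes-independent sanity check is to bind-mount the same path into a throwaway container; if Docker for Mac cannot see the files this way, the hostPath volume will not work either:

# If this does not list the files from /Volumes/Data/data, the problem is Docker's
# file sharing on the Mac rather than the Kubernetes objects.
docker run --rm -v /Volumes/Data/data:/data alpine ls -la /data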

Thanks for any answers.

3
Is it a typo in the claim name in the Deployment? It looks different from the PVC name, store-persistent-volume-claim. – Daein Park
Yes, a typo, sorry, I fixed it. – posthumecaver
The order in which the PV/PVC and the pod are created seems to cause this; in particular, when adding a PVC to an existing pod I had the same issue. However, following the steps in [configure-persistent-volumes][1] worked fine for me. [1]: kubernetes.io/docs/tasks/configure-pod-container/… – void4
Your understanding seems correct, and trying this out locally I am seeing the behaviour you expect to see. I'm using minikube version v1.5.2 and Kubernetes client version v1.16.3, server version v1.16.2. – Amit Kumar Gupta
I removed the Service, and the ports in your Deployment, and changed the image to nginx so that this example is minimal and reproducible. I can exec into the pod and write to files in /data, and then I see them there when I do minikube ssh and check the host path. Conversely, when I minikube ssh and write into the host path, then later exec into the container, I see the data there. So it is indeed working both ways. I tried this in two scenarios: one where I created the host path directory before defining the PV, and one where I did not. Both worked the same. – Amit Kumar Gupta
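
A minimal sketch of the stripped-down Deployment this comment describes (same PVC as in the question, nginx in place of the store image, apps/v1 since v1.16 no longer serves apps/v1beta2; the exact manifest is a reconstruction, not the commenter's own):

kind: Deployment
apiVersion: apps/v1
metadata:
  name: store-deployment
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: store
  template:
    metadata:
      labels:
        k8s-app: store
    spec:
      volumes:
      - name: store-volume
        persistentVolumeClaim:
          claimName: store-persistent-volume-claim
      containers:
      - name: store
        image: nginx
        volumeMounts:
        - name: store-volume
          mountPath: /data

Then, as the comment describes, writing a file under /data via kubectl exec and checking the host path with minikube ssh (and the other way round) shows the same data on both sides.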

3 Answers

2
votes

Assuming you are using a multi-node Kubernetes cluster, you should be able to see the data mounted locally at /Volumes/Data/data on the specific worker node that the pod is running on.

You can check which worker node your pod is scheduled on by using the command kubectl get pods -o wide -n test.
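
For example (pod and node names here are only illustrative, and some columns are trimmed):

kubectl get pods -o wide -n test

NAME                               READY   STATUS    IP            NODE
store-deployment-7c9d8b5f4-abcde   1/1     Running   10.244.1.12   worker-node-1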

Please note that, as per the Kubernetes docs, a HostPath PersistentVolume is for single-node testing only – local storage is not supported in any way and WILL NOT WORK in a multi-node cluster.

It does work in my case.

0
votes

As you are using a hostPath volume, you should check for the '/data' contents under the host path on the worker node on which the pod is running.

-1
votes

Like the answer above said: you need to run 'kubectl get po -n test -o wide' and you will see the node the pod is hosted on. Then, if you SSH into that worker, you can see the volume.
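
Roughly like this (the node name and SSH access are assumptions about your environment):

kubectl get po -n test -o wide      # note the NODE column
ssh <worker-node>                   # or however you reach that node
ls -l /Volumes/Data/data            # the hostPath defined in the PV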