1 vote

I'm trying to mount the Mongo /data directory onto an NFS volume on my Kubernetes master machine to persist Mongo data. The volume mounts successfully, but I can see only the configdb and db directories, not their subdirectories, and the data is not persisting in the volume. When I kubectl describe <my_pv> it shows NFS (an NFS mount that lasts the lifetime of a pod).

Why is that so?

The Kubernetes docs state:

An nfs volume allows an existing NFS (Network File System) share to be mounted into your pod. Unlike emptyDir, which is erased when a Pod is removed, the contents of an nfs volume are preserved and the volume is merely unmounted. This means that an NFS volume can be pre-populated with data, and that data can be “handed off” between pods. NFS can be mounted by multiple writers simultaneously.

I'm using Kubernetes version 1.8.3.

mongo-deployment.yml:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: mongo
  labels:
    name: mongo
    app: mongo
spec:
  replicas: 3
  selector:
    matchLabels:
      name: mongo
      app: mongo
  template:
    metadata:
      name: mongo
      labels:
        name: mongo
        app: mongo
    spec:
      containers:
        - name: mongo
          image: mongo:3.4.9
          ports:
            - name: mongo
              containerPort: 27017
              protocol: TCP
          volumeMounts:
            - name: mongovol
              mountPath: "/data"
      volumes:
      - name: mongovol
        persistentVolumeClaim:
          claimName: mongo-pvc

mongo-pv.yml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv
  labels:
    type: NFS
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: "/mongodata"
    server: 172.20.33.81

mongo-pvc.yml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi
  storageClassName: slow
  selector:
    matchLabels:
      type: NFS

How I set up the NFS share on my Kubernetes master machine:

1) apt-get install nfs-kernel-server
2) mkdir /mongodata
3) chown nobody:nogroup -R /mongodata
4) vi /etc/exports
5) added the line "/mongodata *(rw,sync,all_squash,no_subtree_check)"
6) exportfs -ra
7) service nfs-kernel-server restart
8) showmount -e ----> shows the share
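Before involving Kubernetes at all, the export above can be sanity-checked by mounting it manually from one of the worker nodes; a sketch (the mount point /mnt/nfs-check is just an example):

```shell
# On a worker node: install the NFS client utilities and mount the export by hand
apt-get install -y nfs-common
mkdir -p /mnt/nfs-check
mount -t nfs -o vers=4.1 172.20.33.81:/mongodata /mnt/nfs-check

# Verify the export is writable; with all_squash the file should
# appear as nobody:nogroup on the server
touch /mnt/nfs-check/write-check
ls -l /mnt/nfs-check

umount /mnt/nfs-check
```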

I logged into the bash shell of my pod and can see the directory is mounted correctly, but data is not persisting on my NFS server (the Kubernetes master machine).

Please help me see what I am doing wrong here.


3 Answers

1 vote

It's possible that the pods don't have permission to create files and directories. You can exec into your pod and try to touch a file on the NFS share; if you get a permission error, you can relax the permissions on the file system and in the exports file to allow write access.
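A quick way to run that check, sketched with kubectl (the pod name is a placeholder; substitute one of your actual mongo pods):

```shell
# List the mongo pods using the labels from the Deployment above
kubectl get pods -l app=mongo

# Try to create a file on the mounted volume from inside the pod.
# A "Permission denied" here points at the export/filesystem permissions,
# not at the Kubernetes volume plumbing.
kubectl exec -it <mongo-pod-name> -- touch /data/db/perm-check
kubectl exec -it <mongo-pod-name> -- ls -l /data/db/perm-check
```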

It's also possible to specify a GID in the PV object to avoid permission-denied issues: https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#access-control
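A minimal sketch of that annotation on the PV from the question, assuming the export /mongodata is group-owned by GID 1000 on the NFS server (adjust the GID to whatever group actually owns the directory):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv
  annotations:
    # GID of the group that owns /mongodata on the NFS server.
    # Kubernetes adds this GID to the supplemental groups of any
    # pod that uses the volume.
    pv.beta.kubernetes.io/gid: "1000"
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: "/mongodata"
    server: 172.20.33.81
```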

0 votes

I see you did a chown nobody:nogroup -R /mongodata. Make sure that the application in your pod runs as nobody:nogroup.
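One way to do that is a pod-level securityContext in the Deployment; a sketch, assuming nobody:nogroup maps to UID/GID 65534 as on Debian-based images (the official mongo image normally runs as its own mongodb user, so forcing this may need further adjustment):

```yaml
spec:
  template:
    spec:
      securityContext:
        runAsUser: 65534   # "nobody" on Debian-based images
        fsGroup: 65534     # "nogroup"; supplemental group for volume access
      containers:
        - name: mongo
          image: mongo:3.4.9
```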

0 votes

Add the parameter mountOptions: "vers=4.1" to your StorageClass config; this should fix your issue.
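In the core StorageClass API the mountOptions field is a list; a sketch matching the "slow" class referenced by the PV/PVC above (the provisioner value is a placeholder — with statically created PVs like the one in the question there is no dynamic provisioner):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/no-provisioner  # placeholder; set your NFS provisioner if you use one
mountOptions:
  - vers=4.1   # force NFSv4.1 on mounts of volumes from this class
```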

See this GitHub comment for more info:

https://github.com/kubernetes-incubator/external-storage/issues/223#issuecomment-344972640