
I have an NFS server running on a VM in GCE. The NFS server's /etc/exports file is already configured to allow mounting by the Kubernetes cluster. I created a PersistentVolume (PV) and a PersistentVolumeClaim (PVC), and added spec.containers.volumeMounts and spec.volumes entries to my deployment. The problem is that Google provisions a new disk instead of connecting to the existing NFS server.
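For reference, an export entry on the NFS server VM might look like the sketch below; the export path and the client CIDR are assumptions and have to match the server's actual disk layout and the cluster's node/pod IP ranges:

# /etc/exports (sketch; the path and CIDR are assumptions)
/srv/nfs-share  10.0.0.0/8(rw,sync,no_subtree_check)

After editing the file, re-export with sudo exportfs -ra.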

In the deployment file I have added (a sketch of the full manifest follows below):

          volumeMounts:
            - name: nfs-pvc-data
              mountPath: "/mnt/nfs"
      volumes:
        - name: nfs-pvc-data
          persistentVolumeClaim:
            claimName: nfs-pvc
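For context, those two fragments live at different levels of the Deployment. A minimal sketch of the surrounding manifest, where the app name and image are placeholders rather than values from the original:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app   # placeholder name
  namespace: comms
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app     # placeholder container name
          image: nginx:1.25     # placeholder image
          volumeMounts:         # mounts belong to the container
            - name: nfs-pvc-data
              mountPath: "/mnt/nfs"
      volumes:                  # volumes belong to the pod spec
        - name: nfs-pvc-data
          persistentVolumeClaim:
            claimName: nfs-pvc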

nfs-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
  namespace: comms
  labels:
    volume: nfs-pv
spec:
  capacity:
    storage: 10G
  #volumeMode: Filesystem
  accessModes:
    - ReadOnlyMany
  mountOptions:
    - hard
    - nfsvers=4.2
  nfs:
    path: /
    server: 10.xxx.xx.xx
    readOnly: false

nfs-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
  namespace: comms
spec:
  #selector: # The PVC never binds as long as spec.selector exists in the YAML
    #matchLabels:
      #volume: nfs-pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1G
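To see whether the claim bound to the intended PV or whether dynamic provisioning created a new disk instead, the standard kubectl checks are (namespace taken from the manifests above):

kubectl get pv
kubectl -n comms get pvc nfs-pvc
kubectl -n comms describe pvc nfs-pvc   # the Events section shows whether a new GCE disk was provisioned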
Do you want the new disk to be mounted automatically on the GCE NFS server? – Mahboob

The NFS server is already configured and running. I don't want a new disk to be created. I simply want to mount the NFS share that already exists. Is this not possible? I thought about simply adding an NFS client to the container and connecting that way. – Andrew Taylor

1 Answer


I was able to mount the NFS server external to the Kubernetes cluster using the following PV and PVC YAML.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gce-nfs
  namespace: comms
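  # note: PersistentVolumes are cluster-scoped, so this namespace field is ignored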
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.xxx.xxx.xxx
    path: "/"

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: gce-nfs
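  # note: a PVC is namespaced; create it in the same namespace as the deployment that mounts it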
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Mi
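The empty storageClassName: "" is the important detail: it disables dynamic provisioning, so GKE's default StorageClass no longer creates a new Persistent Disk and the claim binds to the statically defined PV instead. Note that the access modes (ReadWriteMany on both sides) and the requested size (1Mi, no larger than the PV's capacity) now also allow binding. Assuming the two manifests above are saved in a single file such as gce-nfs.yaml (the filename is a placeholder), apply and verify with:

kubectl apply -f gce-nfs.yaml
kubectl get pvc gce-nfs   # add -n <namespace> if the PVC lives elsewhere; STATUS should read Bound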

Then in the deployment YAML I added the following (volumeMounts under the container, volumes at the pod spec level):

      volumeMounts:
        - name: nfs-pvc-data
          mountPath: "/mnt/nfs"
  volumes:
    - name: nfs-pvc-data
      persistentVolumeClaim:
        claimName: gce-nfs

The pod/container will then start with the external NFS share mounted at /mnt/nfs.
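To confirm the mount from inside a running pod, where the pod name below is a placeholder:

kubectl exec -it <pod-name> -- df -h /mnt/nfs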