
I have a GKE cluster running on Google Cloud. I created a persistent volume and mounted it in my deployments, so the connectivity between my application and its persistent storage is established successfully.

I also have Filebeat running on the same cluster, deployed from the manifest at https://github.com/elastic/beats/blob/master/deploy/kubernetes/filebeat-kubernetes.yaml

Both the application volume and Filebeat mounted successfully. The PV was created with access modes: ReadWriteOnce, which is what GCE persistent disks support. But my cluster has many nodes, and the volume cannot be mounted in the pods running on the other nodes, because GCE persistent disks do not support access modes: ReadWriteMany. As a result Filebeat fails too, since it runs on every node via a DaemonSet but the application volume is not available everywhere. Is there a way to resolve this issue?

I have posted an answer regarding Filebeat (it uses volumes differently than apps do). For what reason do your apps need volumes? Is that only for logging? Hint: they should log to stdout. – Jonas

2 Answers


Filebeat should use volumes a bit differently than applications do. Typically applications log to stdout, and the container runtime (e.g. the Docker daemon or containerd) persists those logs on the local node.
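As an illustration (a minimal sketch, not taken from your setup; the pod name and command are hypothetical): an app that writes to stdout needs no volume at all for logging, because the runtime captures the stream and stores it on the node, e.g. under /var/lib/docker/containers with Docker.

```yaml
# Hypothetical example pod: the app writes to stdout/stderr only.
# The container runtime persists this output on the node
# (with Docker: /var/lib/docker/containers/<container-id>/<container-id>-json.log),
# so no PersistentVolume is needed for logging.
apiVersion: v1
kind: Pod
metadata:
  name: stdout-logger   # hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "while true; do echo 'app log line'; sleep 5; done"]
```

These node-local files are what Filebeat tails, and `kubectl logs stdout-logger` reads the same stream.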

Filebeat needs to run on every node, so it should be deployed as a DaemonSet, as you say. But it should mount the log directories from the node using hostPath volumes, not PersistentVolumes.

See this part of the DaemonSet manifest that you linked (no PersistentVolumes are used here):

      volumes:
      - name: config
        configMap:
          defaultMode: 0640
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: varlog
        hostPath:
          path: /var/log
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          # When filebeat runs as non-root user, this directory needs to be writable by group (g+w).
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate

You want to implement access modes: ReadWriteMany. Yes, there are some options: you can use an NFS or GlusterFS cluster.

If you only need read access from many nodes, you can attach a persistent disk with access mode ReadOnlyMany: https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/readonlymany-disks
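A hedged sketch of such a PersistentVolume, assuming a pre-existing GCE disk that already contains the data (the disk name `my-ro-disk` and the size are placeholders):

```yaml
# Sketch only: exposes a pre-populated GCE persistent disk read-only
# to pods on many nodes at once. "my-ro-disk" is a placeholder name.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: readonly-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadOnlyMany
  gcePersistentDisk:
    pdName: my-ro-disk
    fsType: ext4
    readOnly: true
```

Note that the disk must be populated before it is attached read-only; you cannot write to it through this volume.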

You can use Filestore: https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes

You can also use NetApp Cloud Volumes Service (NFS & SMB): https://cloud.google.com/solutions/partners/netapp-cloud-volumes/overview

If ReadOnlyMany is enough for you, go with a persistent disk; if you also need to perform write operations, you have to use NFS.

Cloud Filestore example:

First, create a Filestore instance:

gcloud filestore instances create nfs-server \
    --project=[PROJECT_ID] \
    --zone=us-central1-c \
    --tier=STANDARD \
    --file-share=name="vol1",capacity=1TB \
    --network=name="default",reserved-ip-range="10.0.0.0/29"
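Once the instance is up, its details, including the share's IP address, can also be fetched from the CLI (a sketch; the instance name and zone match the create command above, and this needs a real project to run against):

```shell
# Describe the instance to find the file share's IP address
# (the same address is shown in the Filestore console).
gcloud filestore instances describe nfs-server \
    --project=[PROJECT_ID] \
    --zone=us-central1-c
```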

Then create a PersistentVolume in GKE:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: fileserver
spec:
  capacity:
    storage: 1T
  accessModes:
  - ReadWriteMany
  nfs:
    path: /vol1          # the file-share name from the gcloud command above
    server: [IP_ADDRESS]

[IP_ADDRESS] is shown in the Filestore console.

Now create a PersistentVolumeClaim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fileserver-claim
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""
  volumeName: fileserver
  resources:
    requests:
      storage: 1T
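To check that the claim binds to the volume (a sketch; it assumes the PV and PVC manifests above are saved as the hypothetical files fileserver-pv.yaml and fileserver-pvc.yaml, and needs a running cluster):

```shell
kubectl apply -f fileserver-pv.yaml -f fileserver-pvc.yaml
# The STATUS column should show "Bound" once the claim matches the PV
kubectl get pvc fileserver-claim
```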

Finally, mount the volume in your pod.

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: container-name
    image: image-name
    volumeMounts:
    - mountPath: mount-path
      name: mypvc
  volumes:
  - name: mypvc
    persistentVolumeClaim:
      claimName: fileserver-claim
      readOnly: false
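Because the volume is ReadWriteMany, every pod that references this claim can mount it, regardless of which node it lands on. A quick way to verify from inside a running pod (a sketch; `mount-path` is the same placeholder used in the pod spec above):

```shell
# Confirm the NFS share is mounted and writable
kubectl exec my-pod -- sh -c "df -h mount-path && touch mount-path/test-write"
```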

Documentation: https://cloud.google.com/filestore/docs/accessing-fileshares