0 votes

I am trying to move my Spring Boot app with Elasticsearch to run on minikube. However, I would like to reuse the existing indexes, which I created with a local Elasticsearch, in the Elasticsearch that runs on Kubernetes.

I could find some documentation about Persistent Volumes, but I could not find any information about reusing existing indexes from a local directory.

Could someone please suggest how to mount an existing local directory that contains Elasticsearch indexes into Elasticsearch running on Kubernetes?

I am trying to run the Kubernetes cluster locally using minikube. This is my first try with Kubernetes, so I am not familiar with all aspects of it. I am trying to deploy a Spring Boot app which connects to an Elasticsearch server.

Elasticsearch Server - deployment.yaml

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: elasticsearch
    spec:
      selector:
        matchLabels:
          run: elasticsearch
      replicas: 1
      template:
        metadata:
          labels:
            run: elasticsearch
        spec:
          containers:
            - image: docker.elastic.co/elasticsearch/elasticsearch:6.6.1
              name: elasticsearch
              imagePullPolicy: IfNotPresent
              env:
                - name: discovery.type
                  value: single-node
                - name: cluster.name
                  value: elasticsearch
              ports:
              - containerPort: 9300
                name: nodes
              - containerPort: 9200
                name: client

Comments:

Please upvote my answer if it helped you out. – Harshit
@Harshit I did not check it yet. I will accept your answer as soon as I tested it. – nantitv

2 Answers

2 votes

Try creating a StatefulSet for Elasticsearch that uses a PersistentVolumeClaim (PVC).

Reference link for StatefulSets with PVCs: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/

In the StatefulSet configuration, you need to specify volumeMounts like this. Note that mountPath is the path inside the container where Elasticsearch stores its data (by default /usr/share/elasticsearch/data); the volume behind it is what points at your local directory with the existing indexes:

volumeMounts:
    - name: elasticsearch
      mountPath: /usr/share/elasticsearch/data

This lets the pod discover your older indexes (created locally) as well as the new ones.
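For completeness, here is a minimal sketch of the matching volumeClaimTemplates section of such a StatefulSet (the claim name must match the volumeMounts name above; the 1Gi size is only an assumption):

volumeClaimTemplates:
- metadata:
    name: elasticsearch
  spec:
    accessModes: [ "ReadWriteOnce" ]
    resources:
      requests:
        storage: 1Gi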

To access these indexes, I would also suggest creating a Service, for example of type LoadBalancer, for Elasticsearch. It gives you an external IP through which you can reach the Elasticsearch configuration and indexes running inside the Kubernetes cluster, for example by running curl against that external IP.
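A minimal sketch of such a Service, assuming the pods carry the run: elasticsearch label from the question's Deployment (note that on minikube the external IP of a LoadBalancer usually only becomes available while minikube tunnel is running):

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
spec:
  type: LoadBalancer
  selector:
    run: elasticsearch
  ports:
  - name: client
    port: 9200
  - name: nodes
    port: 9300

Once an external IP is assigned, something like curl http://<external-ip>:9200/_cat/indices should list both the old and the new indexes.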

0 votes

I have posted the answer at Minikube - Not able to get any result from elastic search to if it uses existing indices as well; for the sake of completeness, I am pasting it here too. After mounting the directory, I did not reuse the existing indices; instead I created a new one using the Elasticsearch pod deployed on Kubernetes. I guess the existing indices might also have worked, but I had changed my code while trying different strategies to make it work.

The solution in HostPath with minikube - Kubernetes worked for me. To mount a local directory into a pod in minikube (version v1.9.2), you have to mount that local directory into minikube first, and then use the minikube mount path as the hostPath (https://minikube.sigs.k8s.io/docs/handbook/mount/).

 minikube mount ~/esData:/indexdata
📁  Mounting host path /esData into VM as /indexdata ...
    ▪ Mount type:   <no value>
    ▪ User ID:      docker
    ▪ Group ID:     docker
    ▪ Version:      9p2000.L
    ▪ Message Size: 262144
    ▪ Permissions:  755 (-rwxr-xr-x)
    ▪ Options:      map[]
    ▪ Bind Address: 192.168.5.6:55230
🚀  Userspace file server: ufs starting
✅  Successfully mounted ~/esData to /indexdata

📌  NOTE: This process must stay alive for the mount to be accessible ...

You have to run minikube mount in a separate terminal, because it starts a foreground process that keeps running until you unmount.
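If you do not want to keep a terminal occupied, one common alternative (not part of the original workflow, just a standard shell pattern) is to run the mount in the background:

# keep the mount process alive in the background and log its output
nohup minikube mount ~/esData:/indexdata > minikube-mount.log 2>&1 &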

Instead of using a Deployment as in the original question, I am now using a StatefulSet, but the same solution will also work for a Deployment.

Another issue I faced while mounting was that the Elasticsearch server pod was throwing java.nio.file.AccessDeniedException: /usr/share/elasticsearch/data/nodes. Then I saw here that I have to use initContainers to fix the permissions on /usr/share/elasticsearch/data.

Please see my final YAML:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
spec:
  serviceName: "elasticsearch"
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      initContainers:
      # give the elasticsearch user (uid/gid 1000) ownership of the mounted data directory
      - name: set-permissions
        image: registry.hub.docker.com/library/busybox:latest
        command: ['sh', '-c', 'mkdir -p /usr/share/elasticsearch/data && chown 1000:1000 /usr/share/elasticsearch/data' ]
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:6.6.1
        env:
        - name: discovery.type
          value: single-node
        ports:
        - containerPort: 9200
          name: client
        - containerPort: 9300
          name: nodes
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
      volumes:
      - name: data
        hostPath:
          # path inside the minikube VM, created earlier with `minikube mount`
          path: /indexdata
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  labels:
    service: elasticsearch
spec:
  ports:
  - port: 9200
    name: client
  - port: 9300
    name: nodes
  type: NodePort  
  selector:
    app: elasticsearch
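After applying this (I am assuming the manifest is saved as elasticsearch.yaml; the file name is mine, not part of the original answer), you can check that the indexes are visible roughly like this:

kubectl apply -f elasticsearch.yaml
# minikube prints one URL per NodePort; pick the one for port 9200 (named "client")
minikube service elasticsearch --url
# then list the indexes, replacing <client-url> with the URL printed for port 9200
curl "<client-url>/_cat/indices?v"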