0 votes

I'm running a two-node Elasticsearch cluster: 1 master + 1 data node. Everything is running smoothly and all the indices are green and up (though with no replicas right now).

My current elasticsearch configuration is:

path.data: /path/to/data

However, I wanted to add an additional path (an LVM volume) to expand Elasticsearch's disk space. I shut down the ES data node, then changed the elasticsearch.yml config file as follows:

path.data = ["/path/to/data", "/path/to/newdata"]

Then I restarted the data node. The cluster immediately turned red, with all the shards unassigned. I also checked the global disable-allocation setting, and it is:

routing.allocation.disable_allocation: false
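
(In case it helps, these are roughly the calls I used to inspect the shard states and the allocation settings; `localhost:9200` is just the default local endpoint and may differ in other setups:)

    # List every shard with its state (they all showed up as UNASSIGNED)
    curl -s 'localhost:9200/_cat/shards?v'

    # Check the cluster-wide allocation settings (transient and persistent)
    curl -s 'localhost:9200/_cluster/settings?pretty'

    # Disk space as reported per data node
    curl -s 'localhost:9200/_cat/allocation?v'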

I shut the node down again, removed the second path, restarted the cluster, and everything went green again. Note that Elasticsearch had correctly detected the new data path, and the reported disk space was indeed the increased one.

How can I add a second path to the ES data node to increase its disk space and have Elasticsearch recognize it correctly?

Many thanks in advance for your help!

**** Added ****

Elasticsearch version 1.7.3

_nodes/stats (BEFORE)

     "fs": {
        "timestamp": 1445875849977,
        "total": {
           "total_in_bytes": 50647003136,
           "free_in_bytes": 39121285120,
           "available_in_bytes": 36850778112,
           "disk_reads": 6555,
           "disk_writes": 3959,
           "disk_io_op": 10514,
           "disk_read_size_in_bytes": 117785600,
           "disk_write_size_in_bytes": 34197504,
           "disk_io_size_in_bytes": 151983104,
           "disk_queue": "0",
           "disk_service_time": "0"
        },
        "data": [
           {
              "path": "/data/cluster-name/nodes/0",
              "mount": "/",
              "dev": "/dev/sda1",
              "type": "ext4",
              "total_in_bytes": 50647003136,
              "free_in_bytes": 39121285120,
              "available_in_bytes": 36850778112,
              "disk_reads": 6555,
              "disk_writes": 3959,
              "disk_io_op": 10514,
              "disk_read_size_in_bytes": 117785600,
              "disk_write_size_in_bytes": 34197504,
              "disk_io_size_in_bytes": 151983104,
              "disk_queue": "0",
              "disk_service_time": "0"
           }
        ]
     },

_nodes/stats (AFTER)

"fs": {
        "timestamp": 1445876141872,
        "total": {
           "total_in_bytes": 940360904704,
           "free_in_bytes": 649207984128,
           "available_in_bytes": 626626637824,
           "disk_reads": 8840,
           "disk_writes": 246,
           "disk_io_op": 9086,
           "disk_read_size_in_bytes": 127649792,
           "disk_write_size_in_bytes": 13971456,
           "disk_io_size_in_bytes": 141621248,
           "disk_queue": "0",
           "disk_service_time": "0"
        },
        "data": [
           {
              "path": "/data/cluster-name/nodes/0",
              "mount": "/",
              "dev": "/dev/vda1",
              "type": "ext4",
              "total_in_bytes": 422616936448,
              "free_in_bytes": 131537268736,
              "available_in_bytes": 114234032128,
              "disk_reads": 8524,
              "disk_writes": 232,
              "disk_io_op": 8756,
              "disk_read_size_in_bytes": 126358528,
              "disk_write_size_in_bytes": 13914112,
              "disk_io_size_in_bytes": 140272640,
              "disk_queue": "0",
              "disk_service_time": "0"
           },
           {
              "path": "/data-new/cluster-name/nodes/0",
              "mount": "/data-new",
              "dev": "/dev/mapper/vg0-lvol0",
              "type": "ext4",
              "total_in_bytes": 517743968256,
              "free_in_bytes": 517670715392,
              "available_in_bytes": 512392605696,
              "disk_reads": 316,
              "disk_writes": 14,
              "disk_io_op": 330,
              "disk_read_size_in_bytes": 1291264,
              "disk_write_size_in_bytes": 57344,
              "disk_io_size_in_bytes": 1348608
           }
        ]
     },
Comments:

Julien C.: could you provide your Elasticsearch version and the output of `/_nodes/stats?all=true` (fs part only) when both path.data entries are enabled, please?

int 2Eh: I've just added those details! Thanks!

Julien C.: have you got anything relevant in the logs?

Julien C.: and are the permissions the same?

int 2Eh: The only relevant thing is the following, which appears when I add the second path: `[2015-10-26 16:17:01,012][DEBUG][action.search.type ] [node-name] All shards failed for phase: [query] org.elasticsearch.action.NoShardAvailableActionException: [data-index][14]` Yes, permissions are the same :(

1 Answer

0 votes

This is because when you add additional data paths to Elasticsearch, you are not telling it to use them however it likes; you are telling it to stripe data across all of those paths. So don't think of it as a new pool it can draw from, but rather as RAID-0.
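
As a rough illustration of what striping means here (paths and index name borrowed from the question; the exact on-disk layout depends on version and setup), a 1.x node with both paths enabled keeps directories for the same shard under both data roots and spreads that shard's files between them:

    # The same shard (shard 0 of data-index) has a directory under both data roots;
    # its individual segment files are distributed between the two paths.
    ls /data/cluster-name/nodes/0/indices/data-index/0/index
    ls /data-new/cluster-name/nodes/0/indices/data-index/0/index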

If you had a cluster with a replica count of 1 or more and took one node down to add the new path, the cluster would discard the data on that node and rebuild it from the replicas, striped across both paths.
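
A rough sketch of that approach with curl (this assumes at least two data nodes so the replicas actually have somewhere to live; the endpoint and timeout are illustrative):

    # 1. Give every index one replica so each shard also exists on another data node
    curl -XPUT 'localhost:9200/_settings' -d '{
      "index": { "number_of_replicas": 1 }
    }'

    # 2. Wait for the cluster to go green, i.e. all replicas assigned
    curl -s 'localhost:9200/_cluster/health?wait_for_status=green&timeout=10m'

    # 3. Restart the node with both entries in path.data; its shards are then
    #    rebuilt from the replicas and end up striped across both paths.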