111
votes

How do I move Elasticsearch data from one server to another?

I have server A running Elasticsearch 1.1.1 on one local node with multiple indices. I would like to copy that data to server B running Elasticsearch 1.3.4.

Procedure so far

  1. Shut down ES on both servers.
  2. scp all the data to the correct data dir on the new server (data seems to be located at /var/lib/elasticsearch/ on my Debian boxes).
  3. Change permissions and ownership to elasticsearch:elasticsearch.
  4. Start up the new ES server.

When I look at the cluster with the ES head plugin, no indices appear.

It seems that the data is not loaded. Am I missing something?

By data, if you mean indices, you can simply move the indices folder inside the elasticsearch/data/<clustername>/nodes/<node id> folder to the new corresponding location. This is the Elasticsearch directory structure on Windows; I'm not sure if it's the same on Debian. But the idea is that you can directly move index directories from one cluster to another, assuming compatibility is not broken. – bittusarkar
Are you sure that ES 1.1.1 and ES 1.3.4 use the same Lucene version? This might cause a compatibility issue. Also, there is no guarantee that the ES metadata will be the same. I would suggest doing the copy programmatically: first copy the index schemas, then import the data. – Zouzias

12 Answers

142
votes

The selected answer makes this sound slightly more complex than it is; the following is all you need (install npm on your system first).

npm install -g elasticdump
elasticdump --input=http://mysrc.com:9200/my_index --output=http://mydest.com:9200/my_index --type=mapping
elasticdump --input=http://mysrc.com:9200/my_index --output=http://mydest.com:9200/my_index --type=data

You can skip the first elasticdump command for subsequent copies if the mappings remain constant.

I have just done a migration from AWS to Qbox.io with the above without any problems.

More details over at:

https://www.npmjs.com/package/elasticdump

Help page (as of Feb 2016) included for completeness:

elasticdump: Import and export tools for elasticsearch

Usage: elasticdump --input SOURCE --output DESTINATION [OPTIONS]

--input
                    Source location (required)
--input-index
                    Source index and type
                    (default: all, example: index/type)
--output
                    Destination location (required)
--output-index
                    Destination index and type
                    (default: all, example: index/type)
--limit
                    How many objects to move in bulk per operation
                    limit is approximate for file streams
                    (default: 100)
--debug
                    Display the elasticsearch commands being used
                    (default: false)
--type
                    What are we exporting?
                    (default: data, options: [data, mapping])
--delete
                    Delete documents one-by-one from the input as they are
                    moved.  Will not delete the source index
                    (default: false)
--searchBody
                    Perform a partial extract based on search results
                    (when ES is the input,
                    (default: '{"query": { "match_all": {} } }'))
--sourceOnly
                    Output only the json contained within the document _source
                    Normal: {"_index":"","_type":"","_id":"", "_source":{SOURCE}}
                    sourceOnly: {SOURCE}
                    (default: false)
--all
                    Load/store documents from ALL indexes
                    (default: false)
--bulk
                    Leverage elasticsearch Bulk API when writing documents
                    (default: false)
--ignore-errors
                    Will continue the read/write loop on write error
                    (default: false)
--scrollTime
                    Time the nodes will hold the requested search in order.
                    (default: 10m)
--maxSockets
                    How many simultaneous HTTP requests can we make?
                    (default:
                      5 [node <= v0.10.x] /
                      Infinity [node >= v0.11.x] )
--bulk-mode
                    The mode can be index, delete or update.
                    'index': Add or replace documents on the destination index.
                    'delete': Delete documents on destination index.
                    'update': Use 'doc_as_upsert' option with bulk update API to do partial update.
                    (default: index)
--bulk-use-output-index-name
                    Force use of destination index name (the actual output URL)
                    as destination while bulk writing to ES. Allows
                    leveraging Bulk API copying data inside the same
                    elasticsearch instance.
                    (default: false)
--timeout
                    Integer containing the number of milliseconds to wait for
                    a request to respond before aborting the request. Passed
                    directly to the request library. If used in bulk writing,
                    it will result in the entire batch not being written.
                    Mostly used when you don't care too much if you lose some
                    data when importing but rather have speed.
--skip
                    Integer containing the number of rows you wish to skip
                    ahead from the input transport.  When importing a large
                    index, things can go wrong, be it connectivity, crashes,
                    someone forgetting to `screen`, etc.  This allows you
                    to start the dump again from the last known line written
                    (as logged by the `offset` in the output).  Please be
                    advised that since no sorting is specified when the
                    dump is initially created, there's no real way to
                    guarantee that the skipped rows have already been
                    written/parsed.  This is more of an option for when
                    you want to get most data as possible in the index
                    without concern for losing some rows in the process,
                    similar to the `timeout` option.
--inputTransport
                    Provide a custom js file to use as the input transport
--outputTransport
                    Provide a custom js file to use as the output transport
--toLog
                    When using a custom outputTransport, should log lines
                    be appended to the output stream?
                    (default: true, except for `$`)
--help
                    This page

Examples:

# Copy an index from production to staging with mappings:
elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=http://staging.es.com:9200/my_index \
  --type=mapping
elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=http://staging.es.com:9200/my_index \
  --type=data

# Backup index data to a file:
elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=/data/my_index_mapping.json \
  --type=mapping
elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=/data/my_index.json \
  --type=data

# Backup an index to a gzip using stdout:
elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=$ \
  | gzip > /data/my_index.json.gz

# Backup ALL indices, then use Bulk API to populate another ES cluster:
elasticdump \
  --all=true \
  --input=http://production-a.es.com:9200/ \
  --output=/data/production.json
elasticdump \
  --bulk=true \
  --input=/data/production.json \
  --output=http://production-b.es.com:9200/

# Backup the results of a query to a file
elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=query.json \
  --searchBody '{"query":{"term":{"username": "admin"}}}'

------------------------------------------------------------------------------
Learn more @ https://github.com/taskrabbit/elasticsearch-dump
43
votes

Use ElasticDump

1) yum install epel-release

2) yum install nodejs

3) yum install npm

4) npm install elasticdump

5) cd node_modules/elasticdump/bin

6)

./elasticdump \
  --input=http://192.168.1.1:9200/original \
  --output=http://192.168.1.2:9200/newCopy \
  --type=data
23
votes

You can use the snapshot/restore feature available in Elasticsearch for this. Once you have set up a filesystem-based snapshot repository, you can move it between clusters and restore it on a different cluster.
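
A minimal sketch of that flow with curl (the host names and the repository path /mnt/es_backup are placeholders; the directory must be listed under path.repo in elasticsearch.yml on both clusters):

# Register a filesystem snapshot repository on the old cluster
curl -XPUT 'http://serverA:9200/_snapshot/my_backup' -H 'Content-Type: application/json' -d '{
  "type": "fs",
  "settings": { "location": "/mnt/es_backup" }
}'

# Take a snapshot of all indices and wait for it to complete
curl -XPUT 'http://serverA:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true'

# Copy (or share) the repository directory with the new server, register
# the same repository there, then restore the snapshot
curl -XPOST 'http://serverB:9200/_snapshot/my_backup/snapshot_1/_restore'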

6
votes

I've always had success simply copying the index directory/folder over to the new server and restarting the server. You'll find the index id by running GET /_cat/indices; the folder matching this id is in data\nodes\0\indices (usually inside your Elasticsearch folder, unless you moved it).
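
For example, a rough sketch (paths and hosts are placeholders; stop Elasticsearch on both machines before copying):

# Find the index id (the uuid column; in older versions the folder is
# simply named after the index)
curl 'http://localhost:9200/_cat/indices?v'

# Copy the matching folder into the same location on the new server
scp -r /var/lib/elasticsearch/data/nodes/0/indices/<index_id> \
    user@newserver:/var/lib/elasticsearch/data/nodes/0/indices/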

6
votes

On Ubuntu, I moved data from ELK 2.4.3 to ELK 5.1.1.

The steps are as follows:

$ sudo apt-get update

$ sudo apt-get install -y python-software-properties python g++ make

$ sudo add-apt-repository ppa:chris-lea/node.js

$ sudo apt-get update

$ sudo apt-get install npm

$ sudo apt-get install nodejs

$ npm install colors

$ npm install nomnom

$ npm install elasticdump

From your home directory, go to

$ cd node_modules/elasticdump/

and execute the command below. If you need basic HTTP auth, you can use it like this:

--input=http://name:password@localhost:9200/my_index

Copy an index from production:

$ ./bin/elasticdump --input="http://Source:9200/Sourceindex" --output="http://username:password@Destination:9200/Destination_index"  --type=data
5
votes

There is also the _reindex option.

From the documentation:

Through the Elasticsearch reindex API, available in version 5.x and later, you can connect your new Elasticsearch Service deployment remotely to your old Elasticsearch cluster. This pulls the data from your old cluster and indexes it into your new one. Reindexing essentially rebuilds the index from scratch and it can be more resource intensive to run.

POST _reindex
{
  "source": {
    "remote": {
      "host": "https://REMOTE_ELASTICSEARCH_ENDPOINT:PORT",
      "username": "USER",
      "password": "PASSWORD"
    },
    "index": "INDEX_NAME",
    "query": {
      "match_all": {}
    }
  },
  "dest": {
    "index": "INDEX_NAME"
  }
}
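
Note that for a remote reindex, the old cluster's address has to be whitelisted in the new cluster's elasticsearch.yml, for example:

# elasticsearch.yml on the new cluster (host:port only, no scheme)
reindex.remote.whitelist: "REMOTE_ELASTICSEARCH_ENDPOINT:PORT"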
4
votes

If you can add the second server to the cluster, you can do this (a sketch of the API calls follows the list):

  1. Add server B to the cluster with server A
  2. Increment the number of replicas for the indices
  3. ES will automatically copy the indices to server B
  4. Shut down server A
  5. Decrement the number of replicas for the indices

This will only work if the number of replicas equals the number of nodes.
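
A sketch of steps 2 and 5 using the update-settings API (my_index and the host names are placeholders):

# Step 2: raise the replica count so server B receives a full copy
curl -XPUT 'http://serverA:9200/my_index/_settings' -H 'Content-Type: application/json' -d '{
  "index": { "number_of_replicas": 1 }
}'

# Step 5: after server A is shut down, drop the extra replica
curl -XPUT 'http://serverB:9200/my_index/_settings' -H 'Content-Type: application/json' -d '{
  "index": { "number_of_replicas": 0 }
}'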

4
votes

If anyone encounters the same issue when trying to dump from Elasticsearch <2.0 to >2.0, you need to run:

elasticdump --input=http://localhost:9200/$SRC_IND --output=http://$TARGET_IP:9200/$TGT_IND --type=analyzer
elasticdump --input=http://localhost:9200/$SRC_IND --output=http://$TARGET_IP:9200/$TGT_IND --type=mapping
elasticdump --input=http://localhost:9200/$SRC_IND --output=http://$TARGET_IP:9200/$TGT_IND --type=data --transform "delete doc._source['_id']"
2
votes

You can use elasticdump or multielasticdump to take a backup and restore it, moving data from one server/cluster to another server/cluster.

Please find a detailed answer which I have provided here.
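
For example, a sketch of a multielasticdump round trip (hosts and the backup directory are placeholders):

# Dump every index matching the pattern to a local directory
multielasticdump --direction=dump --match='^.*$' \
  --input=http://oldserver:9200 --output=/tmp/es_backup

# Load the dumped files into the new cluster
multielasticdump --direction=load \
  --input=/tmp/es_backup --output=http://newserver:9200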

1
votes

If you simply need to transfer data from one Elasticsearch server to another, you could also use elasticsearch-document-transfer.

Steps:

  1. Open a directory in your terminal and run
    $ npm install elasticsearch-document-transfer
  2. Create a file config.js
  3. Add the connection details of both Elasticsearch servers in config.js
  4. Set appropriate values in options.js
  5. Run in the terminal
    $ node index.js
1
votes

You can take a snapshot of the complete state of your cluster (including all data indices) and restore it (using the restore API) on the new cluster or server.
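
For instance, once the same repository is registered on the new cluster, a restore that includes the cluster state might look like this (the repository and snapshot names are placeholders):

curl -XPOST 'http://newserver:9200/_snapshot/my_backup/snapshot_1/_restore' -H 'Content-Type: application/json' -d '{
  "indices": "_all",
  "include_global_state": true
}'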

-1
votes

If you don't want to use elasticdump as a console tool, you can use the following Node.js script: