5 votes

We have been using this link as a reference for accommodating changes to a field's mapping in our index with zero downtime.

Question: Considering the same example from the link above, when we reindex data from my_index_v1 to my_index_v2 using the _reindex API, does Elasticsearch guarantee that any concurrent updates happening in my_index_v1 will make it to my_index_v2?

For example, a document might get updated in my_index_v1 before or after it is copied over to my_index_v2 by the reindex.

Ultimately, we just need to ensure that while avoiding any downtime for the mapping changes (hence the _reindex with aliases and the other nice tooling ES provides), none of the adds/updates are missed while this huge reindex is in progress, as we are talking about reindexing more than 50 GB of data.

Thanks,
Sandeep


3 Answers

3 votes

The reindex API will not consider changes made after the process has started. One thing you can do is run the reindex again once the first pass is done, this time with version_type: external. This second pass copies only the documents that are missing from the destination index or that have an older version there than in the source.

Here is the example

POST _reindex
{
  "source": {
    "index": "twitter"
  },
  "dest": {
    "index": "new_twitter",
    "version_type": "external"
  }
}

Setting version_type to external will cause Elasticsearch to preserve the version from the source, create any documents that are missing, and update any documents that have an older version in the destination index than they do in the source index.
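Note that on this second pass, documents that already exist in the destination with the same or a newer version will produce version conflicts, and by default _reindex aborts on the first one. Adding "conflicts": "proceed" (a documented _reindex option) makes it count the conflicts and keep going:

POST _reindex
{
  "conflicts": "proceed",
  "source": {
    "index": "twitter"
  },
  "dest": {
    "index": "new_twitter",
    "version_type": "external"
  }
}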

4 votes

One way to solve this is by using two aliases instead of one. One for queries (let’s call it read_alias), and one for indexing (write_alias). We can write our code so that all indexing happens through the write_alias and all queries go through the read_alias. Let's consider three periods of time:

Before rebuild

read_alias: points to current_index

write_alias: points to current_index

All queries return current data.

All modifications go into current_index.
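
As a minimal sketch of this starting state (assuming the index and alias names used in this answer), both aliases can be created with a single _aliases call:

POST _aliases
{
  "actions": [
    { "add": { "index": "current_index", "alias": "read_alias" } },
    { "add": { "index": "current_index", "alias": "write_alias" } }
  ]
}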

During rebuild

read_alias: points to current_index

write_alias: points to new_index

All queries keep getting data as it existed before the rebuild, since searching code uses read_alias.

All rows, including modified ones, get indexed into the new_index, since both the rebuilding loop and the DB trigger use the write_alias.
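
Pointing write_alias at new_index is a single atomic _aliases call (same assumed names as above), so no modification falls between the two indices during the switch:

POST _aliases
{
  "actions": [
    { "remove": { "index": "current_index", "alias": "write_alias" } },
    { "add": { "index": "new_index", "alias": "write_alias" } }
  ]
}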

After rebuild

read_alias: points to new_index

write_alias: points to new_index

All queries return new data, including the modifications made during rebuild.

All modifications go into new_index.
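
Once the rebuild has finished, read_alias is moved over the same way:

POST _aliases
{
  "actions": [
    { "remove": { "index": "current_index", "alias": "read_alias" } },
    { "add": { "index": "new_index", "alias": "read_alias" } }
  ]
}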

It should even be possible to get the modified data from queries while rebuilding, if we make the DB trigger code index modified rows into both indices while the rebuild is going on (i.e., while the aliases point to different indices).


It is often better to rebuild the index from source data using custom code instead of relying on the _reindex API, since that way we can add new fields that may not have been stored in the old index.
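
As a sketch of what that custom rebuild loop might send (the document IDs and field names here are made up for illustration), each batch read from the source data goes through write_alias via the _bulk API:

POST write_alias/_bulk
{ "index": { "_id": "1" } }
{ "title": "first doc", "new_field": "value computed from source data" }
{ "index": { "_id": "2" } }
{ "title": "second doc", "new_field": "value computed from source data" }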

This article has some more details.

1 vote

It looks like the _reindex API works from a snapshot of the source index taken when the process starts.

That would suggest to me that it cannot reasonably honor changes to the source happening in the middle of the process. You avoid downtime on the search side, but I think you would need to pause updates on the indexing side while the reindex runs.

Something you could do is keep track, on your index, of when each document was last modified. Then, once you finish reindexing and switch the alias, query the old index for what changed in the meantime. Propagate those changes over to the new index and you get eventual consistency.
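
A sketch of that catch-up query, assuming the old index is reachable as old_index, each document carries a last_modified timestamp field, and the reindex started at the made-up time below:

GET old_index/_search
{
  "query": {
    "range": {
      "last_modified": {
        "gte": "2020-01-01T00:00:00Z"
      }
    }
  }
}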