
I am running a 2-node cluster on version 5.6.12.

I followed the following rolling upgrade guide: https://www.elastic.co/guide/en/elasticsearch/reference/5.6/rolling-upgrades.html
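For context, the first step in that guide is to disable shard allocation before taking each node down, roughly:

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.enable": "none"
  }
}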

After reconnecting the last upgraded node to my cluster, the health status remained yellow due to unassigned shards.

Re-enabling shard allocation seemed to have no effect:

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.enable": "all"
  }
}
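As far as I understand, the allocation explain API (available in 5.x) should report why a particular shard stays unassigned, so a request along these lines might help diagnose it (index and shard taken from my output below):

GET _cluster/allocation/explain
{
  "index": "v2_session-prod-2018.11.05",
  "shard": 0,
  "primary": false
}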

My query results when checking cluster health:

GET _cat/health:

1541522454 16:40:54 elastic-upgrade-test yellow 2 2 84 84 0 0 84 0 - 50.0%
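(The row above has no headers; if it helps, adding ?v, e.g. GET _cat/health?v, should print the column names.)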

GET _cat/shards:

v2_session-prod-2018.11.05         3 p STARTED     6000   1016kb xx.xxx.xx.xxx node-25
v2_session-prod-2018.11.05         3 r UNASSIGNED                              
v2_session-prod-2018.11.05         1 p STARTED     6000  963.3kb xx.xxx.xx.xxx node-25
v2_session-prod-2018.11.05         1 r UNASSIGNED                              
v2_session-prod-2018.11.05         4 p STARTED     6000 1020.4kb xx.xxx.xx.xxx node-25
v2_session-prod-2018.11.05         4 r UNASSIGNED                              
v2_session-prod-2018.11.05         2 p STARTED     6000  951.4kb xx.xxx.xx.xxx node-25
v2_session-prod-2018.11.05         2 r UNASSIGNED                              
v2_session-prod-2018.11.05         0 p STARTED     6000  972.2kb xx.xxx.xx.xxx node-25
v2_session-prod-2018.11.05         0 r UNASSIGNED                              
v2_status-prod-2018.11.05          3 p STARTED     6000  910.2kb xx.xxx.xx.xxx node-25
v2_status-prod-2018.11.05          3 r UNASSIGNED   
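If I read the _cat docs correctly, asking for the unassigned.reason column should also show why each replica is unassigned, something like:

GET _cat/shards?v&h=index,shard,prirep,state,unassigned.reason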

Is there another way to get shard allocation working again so I can get my cluster health back to green?

GrafiCode: Could you clarify your question?
juiip: @GrafiCodeStudio I have added to my question, hopefully it's clearer?

1 Answer


The other node in my cluster had a "high disk watermark [90%] exceeded" warning in its logs, so shards were being "relocated away from this node".
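For anyone hitting the same thing, per-node disk usage can be checked with the _cat allocation API, for example:

GET _cat/allocation?v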

I updated the config to:

cluster.routing.allocation.disk.watermark.high: 95%

After restarting the node, shards began to allocate again.
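If I understand the docs correctly, this watermark is a dynamic setting, so it should also be possible to change it through the cluster settings API without restarting the node:

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.high": "95%"
  }
}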

This is a quick fix; I will also attempt to increase the disk space on this node to ensure I don't lose reliability.