6
votes

I have a cluster with a single local node, and the cluster health is yellow. I have now added one more node, but shards cannot be allocated to the second node, so the cluster health is still yellow. I cannot get the state to green, unlike in this guide: cluster health example.

So how do I change the health state to green?

Cluster health:

curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'  
{
 "cluster_name" : "astrung",
 "status" : "yellow",
 "timed_out" : false,
 "number_of_nodes" : 2,
 "number_of_data_nodes" : 2,
 "active_primary_shards" : 22,
 "active_shards" : 22,
 "relocating_shards" : 0,
 "initializing_shards" : 2,
 "unassigned_shards" : 20
}

Shard status:

curl -XGET 'http://localhost:9200/_cat/shards?v'
 index          shard prirep state        docs  store ip          node 
 _river         0     p      STARTED         2  8.1kb 192.168.1.3 One  
 _river         0     r      UNASSIGNED                                
 megacorp       4     p      STARTED         1  3.4kb 192.168.1.3 One  
 megacorp       4     r      UNASSIGNED                                
 megacorp       0     p      STARTED         2  6.1kb 192.168.1.3 One  
 megacorp       0     r      UNASSIGNED                                
 megacorp       3     p      STARTED         1  2.2kb 192.168.1.3 One  
 megacorp       3     r      UNASSIGNED                                
 megacorp       1     p      STARTED         0   115b 192.168.1.3 One  
 megacorp       1     r      UNASSIGNED                                
 megacorp       2     p      STARTED         1  2.2kb 192.168.1.3 One  
 megacorp       2     r      UNASSIGNED                                
 mybucket       2     p      STARTED         1  2.1kb 192.168.1.3 One  
 mybucket       2     r      UNASSIGNED                                
 mybucket       0     p      STARTED         0   115b 192.168.1.3 One  
 mybucket       0     r      UNASSIGNED                                
 mybucket       3     p      STARTED         2  5.4kb 192.168.1.3 One  
 mybucket       3     r      UNASSIGNED                                
 mybucket       1     p      STARTED         1  2.2kb 192.168.1.3 One  
 mybucket       1     r      UNASSIGNED                                
 mybucket       4     p      STARTED         1  2.5kb 192.168.1.3 One  
 mybucket       4     r      UNASSIGNED                                
 .kibana        0     r      INITIALIZING             192.168.1.3 Two  
 .kibana        0     p      STARTED         2  8.9kb 192.168.1.3 One  
 .marvel-kibana 2     p      STARTED         0   115b 192.168.1.3 One  
 .marvel-kibana 2     r      UNASSIGNED                                
 .marvel-kibana 0     r      INITIALIZING             192.168.1.3 Two  
 .marvel-kibana 0     p      STARTED         1  2.9kb 192.168.1.3 One  
 .marvel-kibana 3     p      STARTED         0   115b 192.168.1.3 One  
 .marvel-kibana 3     r      UNASSIGNED                                
 .marvel-kibana 1     p      STARTED         0   115b 192.168.1.3 One  
 .marvel-kibana 1     r      UNASSIGNED                                
 .marvel-kibana 4     p      STARTED         0   115b 192.168.1.3 One  
 .marvel-kibana 4     r      UNASSIGNED                                
 user_ids       4     p      STARTED        11    5kb 192.168.1.3 One  
 user_ids       4     r      UNASSIGNED                                
 user_ids       0     p      STARTED         7 25.1kb 192.168.1.3 One  
 user_ids       0     r      UNASSIGNED                                
 user_ids       3     p      STARTED        11  4.9kb 192.168.1.3 One  
 user_ids       3     r      UNASSIGNED                                
 user_ids       1     p      STARTED         8 28.7kb 192.168.1.3 One  
 user_ids       1     r      UNASSIGNED                                
 user_ids       2     p      STARTED        11  8.5kb 192.168.1.3 One  
 user_ids       2     r      UNASSIGNED                              
3
If you have a large amount (e.g. gigabytes) of data then moving shards around can take a long time (e.g. hours). Can you update your question with your cluster status - curl -XGET 'http://localhost:9200/_cluster/health?pretty=true' - and the list of shards - curl -XGET 'http://localhost:9200/_cat/shards?v'? – Olly Cruickshank
"number_of_nodes" : 2, "number_of_data_nodes" : 2, "active_primary_shards" : 22, "active_shards" : 22, "relocating_shards" : 0, "initializing_shards" : 2, "unassigned_shards" : 20 }voxter
So how do I move the unassigned shards? Is this the reason for the cluster health? – voxter
Can you run both the commands I posted above and update your question with the results? If your shards are initialising then it's likely they are moving data around - this is a slow process if you have a large amount of data. – Olly Cruickshank
I have edited the question. I hope you will read the result :)) – voxter

3 Answers

33
votes

I suggest updating the replication factor of all the indices to 0 and then updating it back to 1.

Here's a curl command to get you started:

curl -XPUT 'http://localhost:9200/_settings' -H 'Content-Type: application/json' -d '
{
    "index" : {
        "number_of_replicas" : 0
    }
}'
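
Once the cluster reports green, you can raise the replica count back to 1 with the same endpoint. A minimal sketch, assuming the same localhost:9200 host as above:

curl -XPUT 'http://localhost:9200/_settings' -H 'Content-Type: application/json' -d '
{
    "index" : {
        "number_of_replicas" : 1
    }
}'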
1
votes

Though recovery can normally take a long time, looking at the number and size of your documents, it should take a very short time to recover.

It looks like you have issues with the nodes contacting each other. Check firewall rules and ensure ports 9200 and 9300 are reachable from each node.
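
As a quick check (a rough sketch; netcat is assumed to be available, and the placeholder host should be replaced with the other node's actual address), you can test whether both ports are reachable from each node:

nc -zv <other-node-ip> 9200   # HTTP port
nc -zv <other-node-ip> 9300   # transport port used for node-to-node communication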

1
votes

Like @mohitt said above, update number_of_replicas to zero (for local development only; be careful about using this in production).

You can run the below in the Kibana Dev Tools Console:

PUT _settings
{
    "index" : {
        "number_of_replicas" : 0
    }
}
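
Afterwards you can confirm the cluster has gone green from the same console; this is just the health check from the question, expressed as a Dev Tools request:

GET _cluster/health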