I have a working ZooKeeper ensemble with 3 instances and a SolrCloud cluster with several Solr instances. I created a collection configured with 2 shards. Then I:

create 1 core on instance1
create 1 core on instance2
create 1 core on instance1
create 1 core on instance2

which gives me this configuration:

instance1: shard1_leader, shard2_replica
instance2: shard1_replica, shard2_leader
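The steps above can be sketched with CoreAdmin API calls. This is only a sketch: the hostnames, ports, core names, and the collection name `mycollection` are assumptions, not values from the question.

```shell
# Assumed: instance1 at localhost:8983, instance2 at localhost:7574,
# and an existing 2-shard collection named "mycollection".
# The first core created for each shard becomes that shard's leader,
# so alternating instances spreads the leaders across them.

curl "http://localhost:8983/solr/admin/cores?action=CREATE&name=mycollection_shard1_replica1&collection=mycollection&shard=shard1"
curl "http://localhost:7574/solr/admin/cores?action=CREATE&name=mycollection_shard2_replica1&collection=mycollection&shard=shard2"
curl "http://localhost:8983/solr/admin/cores?action=CREATE&name=mycollection_shard2_replica2&collection=mycollection&shard=shard2"
curl "http://localhost:7574/solr/admin/cores?action=CREATE&name=mycollection_shard1_replica2&collection=mycollection&shard=shard1"
```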

If I instead add 2 cores to instance1 and then 2 cores to instance2, both leaders end up on instance1 and no re-election happens:

instance1: shard1_leader, shard2_leader
instance2: shard1_replica, shard2_replica

Back to my ideal scenario (leaders on separate instances): when I add a third instance with 2 replicas and kill one of the instances running a leader, the election picks the instance that already has a leader.

My question is: why does ZooKeeper behave this way? Shouldn't it distribute leaders? If I put a double-leader instance under heavy load, will ZooKeeper run an election?

1 Answer

Got this answer from Erick Erickson on the Lucene mailing list:

This is probably not all that important to worry about. The additional duties of a leader are pretty minimal. And the leaders will shift around anyway as you restart servers etc. Really feels like a premature optimization.
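If the leader placement still matters, Solr's Collections API offers a way to nudge leaders onto specific replicas via the `preferredLeader` replica property and a `REBALANCELEADERS` call. A sketch, assuming a collection named `mycollection` and a replica named `core_node1` on the underloaded instance (both names are placeholders):

```shell
# Mark one replica of shard2 as the preferred leader.
# "core_node1" is a hypothetical replica name; check /clusterstate for real ones.
curl "http://localhost:8983/solr/admin/collections?action=ADDREPLICAPROP&collection=mycollection&shard=shard2&replica=core_node1&property=preferredLeader&property.value=true"

# Trigger re-election so preferredLeader replicas take over where possible.
curl "http://localhost:8983/solr/admin/collections?action=REBALANCELEADERS&collection=mycollection"
```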