
I have a SolrCloud (version 6) installation with replication factor 3 and 150 shards across 30 servers.

I see strange behavior after restarting Solr on a single server: sometimes everything is OK and Solr comes up without any problems after replaying its transaction logs. But more often it starts a full recovery from its replicas. Sometimes the recovery covers all shards on the node, sometimes only a few of them. There are no warning/error logs about any failures before the recovery.

Is it possible to stop Solr gracefully?

Also, I can't understand why Solr copies all of the index files from a replica for each shard instead of fetching only the latest changes.

Can you share your autoCommit settings? My guess is that during the time the Solr node was stopped and restarted, other replicas received updates and/or ZooKeeper has a new version number in its internal state, which would cause a Solr recovery on restart. – kellyfj
I have solr.autoSoftCommit.maxTime = 60000 and solr.autoCommit.maxTime = 600000. – Vadim PS

1 Answer


Your autoCommit maxTime of 600000 ms (600 seconds) is very high. What does that mean for SolrCloud in practice?

It means that up to ten minutes of updates can sit only in the transaction log before a hard commit makes them durable in the index (the tlog itself is flushed, but not fsync'd, on every write). On a Solr node restart, the node contacts its shard leader and either

Replays the updates from its own tlog if the leader received fewer than 100 new updates while the node was down,

OR

Does an old-style full index replication from the leader to catch up, if the leader received more than 100 updates while the node was offline.

https://lucidworks.com/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/
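
That 100-update cutoff comes from the update log's numRecordsToKeep setting, which defaults to 100. Here's a sketch of how I'd tune both knobs in solrconfig.xml; the values are illustrative rather than a recommendation, so adjust them to your update rate:

    <updateHandler class="solr.DirectUpdateHandler2">
      <updateLog>
        <str name="dir">${solr.ulog.dir:}</str>
        <!-- keep more records per tlog so a briefly-offline replica can
             peer-sync from the leader instead of copying the whole index -->
        <int name="numRecordsToKeep">1000</int>
      </updateLog>
      <autoCommit>
        <!-- hard commit more often than every 600s; openSearcher=false keeps
             the commit cheap because no new searcher is opened -->
        <maxTime>${solr.autoCommit.maxTime:60000}</maxTime>
        <openSearcher>false</openSearcher>
      </autoCommit>
    </updateHandler>

The trade-off: a bigger numRecordsToKeep means larger tlogs and longer tlog replay on startup, so don't crank it arbitrarily high.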

My guess is that in your case you are getting the latter. Just be sure to shut down gracefully via the Solr scripts: make sure you're not doing any "kill -9", and check that Solr isn't dying due to heap/OutOfMemory problems.
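
For example, with the bin/solr script that ships with Solr (the port is illustrative; use whatever your node runs on):

    # graceful stop: the node deregisters from ZooKeeper and closes its cores
    bin/solr stop -p 8983

    # what to avoid: SIGKILL gives Solr no chance to shut down cleanly,
    # which is exactly what tends to trigger full recovery on the next start
    # kill -9 <solr_pid>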

Also, one problem I've seen (in SolrCloud 5.3, anyway) is that if you restart a Solr node before ZooKeeper realizes the node has "gone", SolrCloud can leave ZooKeeper in a funky state where it thinks the Solr node is running when it's not. So one thing I generally like to do is check that all the other nodes agree on the state of the system (i.e., that the node is "gone") before restarting it.
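
One way to check that (a sketch using the Collections API; the hostname is illustrative) is to ask a node that's staying up for CLUSTERSTATUS and confirm the replicas on the restarted node are reported as down before starting it again:

    # fetch the cluster state from any live node and eyeball the replica states
    curl -s "http://solr-host:8983/solr/admin/collections?action=CLUSTERSTATUS&wt=json" \
      | grep -o '"state":"[a-z]*"' | sort | uniq -c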