So here's my dilemma...
I'm running a realtime search index with Solr, indexing about 6M documents per day. The documents expire after about 7 days, so every day I add 6M documents and delete 6M documents. Unfortunately, deletes only mark documents as removed; the disk space isn't reclaimed until the segments are merged, so I need to run "optimize" every so often or else I'll run out of disk space.
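For context, the daily housekeeping step is basically this. Here's a minimal sketch, assuming a Python client talking to Solr's plain HTTP update API, with a hypothetical core named `docs` and a hypothetical `timestamp` field recording when each document was indexed:

```python
import requests

SOLR = "http://localhost:8983/solr/docs"  # hypothetical core URL

def expire_and_optimize():
    # Delete everything older than 7 days. This only *marks* documents
    # as deleted; the segment data stays on disk for now.
    requests.post(
        f"{SOLR}/update",
        json={"delete": {"query": "timestamp:[* TO NOW-7DAYS]"}},
        params={"commit": "true"},
    ).raise_for_status()

    # Merge segments so the space held by deleted documents is actually
    # reclaimed. On my index this is the step that takes about an hour.
    requests.post(f"{SOLR}/update", params={"optimize": "true"}).raise_for_status()
```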
During "optimize", Solr continues to serve requests for reads, but write requests are blocked. I have all my writes behind a queue, so operationally, everything is fine. However, since my index is so large, "optimize" takes about an hour, and for this hour, no new updates are available for reads. So my index is realtime except for the hour a day that I optimize. During this time, it looks like the index is behind by up to an hour. This is not optimal.
My current solution is this: write all data to two Solr indexes, both behind queues, and alternate "optimize" between them every 12 hours. During "optimize" of index 1, direct all read traffic to index 2, and vice versa. This time-based routing does seem pretty brittle and sloppy, though.
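Concretely, the routing is little more than a clock check. A rough sketch of what I mean, assuming two cores at hypothetical URLs and optimize windows hard-coded to hours 0 and 12 UTC:

```python
import datetime
import requests

INDEXES = [
    "http://solr1:8983/solr/docs",  # hypothetical URLs for the two indexes
    "http://solr2:8983/solr/docs",
]

def active_read_index(now=None):
    # Hypothetical schedule: index 0 is optimized around hour 0 UTC,
    # index 1 around hour 12 UTC. Route reads away from whichever index
    # is (probably) optimizing; outside those windows either one works.
    now = now or datetime.datetime.utcnow()
    if now.hour == 0:
        return INDEXES[1]
    if now.hour == 12:
        return INDEXES[0]
    return INDEXES[0]  # default outside the optimize windows

def search(q):
    resp = requests.get(f"{active_read_index()}/select", params={"q": q, "wt": "json"})
    resp.raise_for_status()
    return resp.json()
```

The hard-coded windows are exactly what makes it feel fragile: the router is guessing when the optimize finishes rather than observing it.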
Is there a better way?