5
votes

Summary: I collect the doc ids of all hits for a given search with a custom Collector that populates a BitSet with the ids. Searching and collecting the doc ids is fast enough for my needs, but actually fetching the documents from disk is very slow. Is there a way to tune Lucene for faster document retrieval?
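
For reference, my collector does roughly the equivalent of the following Java sketch (simplified; the class and variable names are illustrative, not my actual code):

import java.util.BitSet;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.Collector;
import org.apache.lucene.search.Scorer;

// Records every matching doc id in a BitSet; scores are ignored.
public class BitSetCollector extends Collector {
  private final BitSet hits;
  private int docBase;  // doc id offset of the segment currently being searched

  public BitSetCollector(int maxDoc) {
    this.hits = new BitSet(maxDoc);
  }

  @Override
  public void setScorer(Scorer scorer) {
    // Scores are not needed; we only record which documents matched.
  }

  @Override
  public void collect(int doc) {
    hits.set(docBase + doc);  // translate segment-local id to index-wide id
  }

  @Override
  public void setNextReader(IndexReader reader, int docBase) {
    this.docBase = docBase;
  }

  @Override
  public boolean acceptsDocsOutOfOrder() {
    return true;  // collection order does not matter when filling a BitSet
  }

  public BitSet getHits() {
    return hits;
  }
}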

Details: I'm working on a processed corpus of Wikipedia and I keep each sentence as a separate document. When I search for "computer", I get all sentences containing that term. Searching the corpus and collecting all document ids completes in under a second, but fetching the first 1000 documents takes around 20 seconds (about 20 ms per document), and fetching the rest scales proportionally (i.e. another 20 seconds for each additional batch of 1000 documents).

Subsequent searches and document fetches take much less time (though I don't know who is doing the caching, the OS or Lucene?), but I'll be searching for many diverse terms and I don't want to rely on caching; the performance of the very first search is crucial for me.

I'm looking for suggestions/tricks that will improve the document-fetching performance (if it's possible at all). Thanks in advance!

Addendum:

I use Lucene 3.0.0 and drive the Lucene classes from Jython. This means I call the get_doc method of the following Jython class for every doc id retrieved during the search:

import java.io
from org.apache.lucene.index import IndexReader
from org.apache.lucene.store import FSDirectory

class DocumentFetcher(object):
  def __init__(self, index_name):
    self._directory = FSDirectory.open(java.io.File(index_name))
    self._index_reader = IndexReader.open(self._directory, True)  # read-only reader

  def get_doc(self, doc_id):
    # Loads the stored fields of one document from disk.
    return self._index_reader.document(doc_id)

I have 50M documents in my index.

2
I have worked with this size of data, but not with this many (50M) documents. 20 ms is a "good" response time when you retrieve only a few tens of documents, which is the typical case. Since you want to retrieve a huge amount of data, it feels too slow. If you want significantly better performance, I suppose you need to use a whole lot of memory. – Shashikant Kore
I assume a memory-speed trade-off would involve some kind of pre-warming of Lucene (loading a significant portion of the documents into memory before carrying out the search and fetch operations). Hmm, maybe I can keep the documents in an external DB and hope that the DB handles the caching better than my custom solution. – Ruggiero Spearman
You can make a dummy call to FieldCache.DEFAULT.getStrings(), which will load all the values of that field into memory (sketched below the comments). If that call survives an OutOfMemoryError, you should see performance gains with the solution I gave previously. – Shashikant Kore
I will give it a try, thanks! – Ruggiero Spearman
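
The dummy warm-up call mentioned in the comments could look roughly like this (a sketch assuming Lucene 3.0's FieldCache API; the index path and field name are placeholders, and the FieldCache is populated from the field's indexed terms, not from stored values):

import java.io.File;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.FieldCache;
import org.apache.lucene.store.FSDirectory;

IndexReader reader = IndexReader.open(FSDirectory.open(new File("index")), true);
// Forces all values of the field into the FieldCache up front. With 50M
// documents this may need a very large heap, hence the OOME caveat above.
String[] warmed = FieldCache.DEFAULT.getStrings(reader, "idFieldName");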

2 Answers

2
votes

You are probably storing a lot of information in each document. Reduce the stored fields as much as you can.
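
For example, at indexing time you could store only a small identifier field and index the sentence text without storing it, if the full text can be fetched from somewhere else (a sketch against Lucene 3.0's Field API; the field and variable names are only illustrative):

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;

Document doc = new Document();
// Keep the stored data small: store only the id...
doc.add(new Field("idFieldName", idValue, Field.Store.YES, Field.Index.NOT_ANALYZED));
// ...and index the sentence text for searching without storing it.
doc.add(new Field("sentence", sentenceText, Field.Store.NO, Field.Index.ANALYZED));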

Secondly, when retrieving documents, load only the fields you need. You can use the following method of IndexReader to specify which stored fields to load:

public abstract Document document(int n, FieldSelector fieldSelector)

This way you don't load fields that are never used.

You can use the following code sample:

// Load only the stored "idFieldName" field; the second, empty set names fields to load lazily.
FieldSelector idFieldSelector =
    new SetBasedFieldSelector(Collections.singleton("idFieldName"),
                              Collections.<String>emptySet());
for (int i : resultDocIDs) {
  String id = reader.document(i, idFieldSelector).get("idFieldName");
}
1
votes

Scaling Lucene and Solr discusses many ways to improve Lucene performance. Since you are working on Lucene search over Wikipedia, you may also be interested in Rainman's Lucene Search of Wikipedia. He mostly discusses algorithms rather than performance, but it may still be relevant.