I have been tasked with dealing with OutOfMemoryError problems on a Solr installation. I have finally managed to get it to stay up for more than a few minutes by using the AggressiveHeap JVM option.
I have never worked with Solr, so am feeling my way a bit.
These are the steps we take:
- Start Tomcat
- Kick off a delta-import (see the sketch just below)
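For step two, we hit the DataImportHandler over HTTP, roughly like this; a sketch assuming DIH is registered at the default /dataimport path of a single-core setup (the host and port are placeholders for our environment):

```sh
# Kick off the incremental import (step two above)
curl "http://localhost:8080/solr/dataimport?command=delta-import"

# DIH also reports progress, which we poll while watching the heap
curl "http://localhost:8080/solr/dataimport?command=status"
```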
After the delta-import starts, heap consumption rises inexorably. We first tried with Xmx set to 4 GB, which led either to OutOfMemoryErrors or to the system becoming unresponsive, so we tried the AggressiveHeap option, which caused the JVM to take about 5.5 GB of RAM. As you can see in the screenshot, this time the GC was able to free memory: the rate of memory consumption slows, then towards the right of the image there is another GC that actually reclaims space, and it keeps going like that.
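For reference, here is roughly how the two JVM configurations look in our Tomcat setup; a sketch assuming the standard bin/setenv.sh mechanism (the file and the CATALINA_OPTS variable are just how we pass flags; only the two flags themselves come from the experiments above):

```sh
# bin/setenv.sh -- sourced by catalina.sh when Tomcat starts

# First attempt: fixed 4 GB heap; ended in OutOfMemoryErrors or an
# unresponsive system
#CATALINA_OPTS="-Xmx4g"

# Current attempt: let HotSpot size and tune the heap itself
# (it took about 5.5 GB on this 12 GB machine)
CATALINA_OPTS="-XX:+AggressiveHeap"

export CATALINA_OPTS
```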
What is this initial allocation of memory? Is it the index being loaded into RAM? Is there a way to reduce this?
I have tried tweaking ramBufferSizeMB, maxBufferedDocs, and mergeFactor, and have also uncommented the StandardIndexReaderFactory declaration so I could set termIndexDivisor to 12, but it is hard to see whether these changes have made any difference (yes: more analysis is needed). The relevant parts of the config are sketched below.
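For concreteness, the touched parts of solrconfig.xml now look roughly like this; the values are the ones we are currently experimenting with (the divisor block follows the commented-out example shipped in the stock config), so treat them as illustrative rather than recommended:

```xml
<!-- Indexing buffers and merge policy (values under experimentation) -->
<indexDefaults>
  <ramBufferSizeMB>32</ramBufferSizeMB>
  <maxBufferedDocs>1000</maxBufferedDocs>
  <mergeFactor>10</mergeFactor>
</indexDefaults>

<!-- Previously commented out; uncommented so the divisor can be raised.
     A larger term index divisor keeps fewer term-index entries in RAM,
     at the cost of slower term lookups. -->
<indexReaderFactory name="IndexReaderFactory"
                    class="org.apache.solr.core.StandardIndexReaderFactory">
  <int name="setTermIndexDivisor">12</int>
</indexReaderFactory>
```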
The index was built up over a number of failed indexing sessions, and the termIndexDivisor parameter was only added recently. Does the fact that the index files already exist stop this parameter from having any effect?
(The machine is physical, with 12 GB of RAM and 16 cores, and the Solr Tomcat shares it with another large Tomcat instance. We are running Oracle JDK 1.6.0_21.)