
We are using the Bitnami distribution of Solr 4.6.0-1 on a 64-bit Windows installation with 64-bit Java 1.7u51, and we are seeing consistent PermGen exceptions. PermGen is configured at 512 MB. Bitnami ships with a 32-bit version of Java for Windows, so we replaced it with a 64-bit version.

Passed-in Java options:

-XX:MaxPermSize=512M
-Xms3072M
-Xmx6144M
-XX:+UseParNewGC
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+CMSClassUnloadingEnabled
-XX:NewRatio=3
-XX:MaxTenuringThreshold=8

This is our use case:

We have what we call a database core, which remains fairly static and contains the imported contents of a table from SQL Server. We then have user cores that contain the record IDs of results from a text search performed outside of Solr. We query the database core for the data we want and limit the results to the contents of the user core. This lets us combine facet data from Solr with the search results from another engine. We create the user cores on demand and remove them when the user logs out (a sketch of the limiting query follows below).
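For reference, the limiting query looks roughly like this. This is a minimal SolrJ 4.x sketch, not our exact code; the core names, the id field, the facet field, and the use of Solr's cross-core {!join} parser are assumptions based on the description above:

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.SolrServerException;
    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class LimitedFacetQuery {
        public static void main(String[] args) throws SolrServerException {
            HttpSolrServer database = new HttpSolrServer("http://localhost:8983/solr/database");

            SolrQuery query = new SolrQuery("*:*");
            // Keep only documents whose id also appears in this user's core
            // (user_12345 is a hypothetical per-session core name).
            query.addFilterQuery("{!join fromIndex=user_12345 from=id to=id}*:*");
            // Facet over the limited result set.
            query.setFacet(true);
            query.addFacetField("category");

            QueryResponse response = database.query(query);
            System.out.println(response.getFacetField("category").getValues());
        }
    }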

Our issue is that the constant creation and removal of user cores, combined with the constant importing, seems to push us over our PermGen limit. The user cores are removed at the end of every session, and as a test I wrote an application that loops: create a user core, import a set of data into it, query the database core using it as a limiter, and then remove the user core (sketched below). My expectation was that all the PermGen space associated with a user core would be freed when it is unloaded, allowing garbage collection to reclaim that memory. This was not the case: usage climbed steadily until the application exhausted the memory.
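The test loop was essentially the following. Again a sketch (SolrJ 4.x), not the actual test harness; it assumes an instance directory named user_test with a conf/ directory already exists on disk, since CoreAdmin's CREATE does not supply one:

    import java.io.IOException;
    import org.apache.solr.client.solrj.SolrServerException;
    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.client.solrj.request.CoreAdminRequest;
    import org.apache.solr.common.SolrInputDocument;

    public class CoreChurnTest {
        public static void main(String[] args) throws SolrServerException, IOException {
            // CoreAdmin requests go against the Solr root URL, not a core URL.
            HttpSolrServer admin = new HttpSolrServer("http://localhost:8983/solr");

            while (true) {
                // 1. Create the user core.
                CoreAdminRequest.createCore("user_test", "user_test", admin);

                // 2. Import a batch of record IDs into it.
                HttpSolrServer userCore = new HttpSolrServer("http://localhost:8983/solr/user_test");
                for (int id = 0; id < 1000; id++) {
                    SolrInputDocument doc = new SolrInputDocument();
                    doc.addField("id", Integer.toString(id));
                    userCore.add(doc);
                }
                userCore.commit();

                // 3. Run the join query against the database core (as shown above).

                // 4. Unload the core; we expected its PermGen footprint to
                //    become collectable here, but usage only ever grew.
                CoreAdminRequest.unloadCore("user_test", admin);
            }
        }
    }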

I also investigated whether a connection between the two cores was left behind because I was joining them in a query, but even unloading the database core after unloading all the user cores does not prevent the limit from being hit, nor does any memory get garbage collected from Solr.

Is this a known issue when creating and unloading a large number of cores? Could it be caused by the core configuration? Is there something beyond unloading that needs to happen to free the references?

Thanks

Note: I've used tools such as Plumbr to determine whether it's a leak within Solr, and that turned up nothing.

What I do not understand: you say "We have the permgen configured to be 512MB", but you write that your Java parameters include -XX:MaxPermSize=64M. That does not fit. – cheffe
That was my bad; I copied the config from a Solr instance I was using to run garbage-collection tests against. The actual deployment that is having problems uses 512M, and I've updated the question to reflect that. Thanks. – JoshD

1 Answer


This looks like an issue with Solr itself. We are putting together a package containing a test case and hope to get it fixed in a future release. In the meantime we made the user cores static objects that are created on demand, which mitigates the issue to the point where resetting the instance once a week keeps it from occurring.
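In sketch form, the workaround replaces the create/unload cycle with long-lived, pre-created cores that are cleared and reused. The slot name is hypothetical, and the delete-by-query reset is my assumption of how reuse would look, not a verified copy of our code:

    import java.io.IOException;
    import org.apache.solr.client.solrj.SolrServerException;
    import org.apache.solr.client.solrj.impl.HttpSolrServer;

    public class ReusableUserCore {
        // One long-lived handle per pre-created core slot; no CoreAdmin
        // create/unload calls are made, so no new SolrCore (and its
        // PermGen footprint) is ever constructed per session.
        private final HttpSolrServer core =
                new HttpSolrServer("http://localhost:8983/solr/user_slot_1");

        // Reset the slot for the next session instead of unloading the core.
        public void reset() throws SolrServerException, IOException {
            core.deleteByQuery("*:*");
            core.commit();
        }
    }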