3
votes

I have a class 'Company' that is being mapped using NHibernate and cached in the second-level cache (memcached). Our team recently added a new bool property to this class which will be stored in the database.

Everything worked fine in our development environment, but as soon as we deployed to our staging environment (which shares the live database) we started getting the following error:

System.IndexOutOfRangeException: Index was outside the bounds of the array.
at (Object , Object[] , SetterCallback )
at NHibernate.Tuple.Entity.PocoEntityTuplizer.SetPropertyValuesWithOptimizer(Object entity, Object[] values)
at NHibernate.Tuple.Entity.PocoEntityTuplizer.SetPropertyValues(Object entity, Object[] values)
at NHibernate.Persister.Entity.AbstractEntityPersister.SetPropertyValues(Object obj, Object[] values, EntityMode entityMode)
at NHibernate.Cache.Entry.CacheEntry.Assemble(Object[] values, Object result, Object id, IEntityPersister persister, IInterceptor interceptor, ISessionImplementor session)
at NHibernate.Cache.Entry.CacheEntry.Assemble(Object instance, Object id, IEntityPersister persister, IInterceptor interceptor, ISessionImplementor session)
at NHibernate.Event.Default.DefaultLoadEventListener.AssembleCacheEntry(CacheEntry entry, Object id, IEntityPersister persister, LoadEvent event)
at NHibernate.Event.Default.DefaultLoadEventListener.LoadFromSecondLevelCache(LoadEvent event, IEntityPersister persister, LoadType options)
at NHibernate.Event.Default.DefaultLoadEventListener.DoLoad(LoadEvent event, IEntityPersister persister, EntityKey keyToLoad, LoadType options)

My best guess is that NHibernate can't deserialize the old cache entries (which don't have the new property) into the new Company object. I believe I confirmed this: when I disabled the second-level cache in our staging environment, the exceptions stopped.

So I guess my question is: how can we force NHibernate to fall through to the database when it can't deserialize a cache entry, instead of bubbling up an exception? Has anyone else run into this problem?

I think right now, we're going to have to deploy with second-level caching turned off, restart the memcached servers and then re-enable second-level caching. However, this solution is not ideal. If anyone has a better suggestion I'd be very thankful.
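For reference, the global switch involved in that workaround is NHibernate's `cache.use_second_level_cache` session-factory property. A minimal sketch of the config change (assuming XML configuration; adjust to your setup) would be:

```xml
<!-- hibernate.cfg.xml: disable the second-level cache for the first deploy,
     then flip this back to true once the memcached servers have been restarted. -->
<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
  <session-factory>
    <property name="cache.use_second_level_cache">false</property>
  </session-factory>
</hibernate-configuration>
```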

3
We restart our memcached servers if we change the schema - works for us. - Martin Ernst

3 Answers

0
votes

As an update for anyone who is interested, I followed the steps outlined in my post:

[...] we're going to have to deploy with second-level caching turned off, restart the memcached servers and then re-enable second-level caching.

...and everything worked OK. It was a bit more complicated than our normal deploys, but we didn't have any errors.

0
votes

Doing all that shouldn't be necessary - turning off the second-level cache, restarting, and so on. I think all you really need is to invalidate the items in your cache so that NHibernate goes back to the database.

I found a gem relating to memcached: you can invalidate the entire cache by flushing it from the telnet interface.

telnet SomeServerInCluster 11211
flush_all

That accomplishes the same thing as restarting all of the memcached machines.

http://www.lzone.de/articles/memcached.htm
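If you have several memcached nodes, the same flush can be scripted instead of telnetting to each one by hand. A sketch (the hostnames are placeholders for your cluster):

```shell
# Send flush_all to every memcached node in the cluster.
# flush_all marks all items as expired; memcached reclaims them lazily,
# so the command returns almost immediately.
for host in cache01.example.com cache02.example.com; do
  # -w 1: give nc a second to read the "OK" reply before moving on.
  printf 'flush_all\r\nquit\r\n' | nc -w 1 "$host" 11211
done
```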

0
votes

Are you using cache region prefixes? What we do is switch to a different cache region prefix any time a core entity definition changes. The effect is similar to flushing the cache region.

The upside is that you avoid mismatches between the new definition and the old, stale definitions still sitting in the cache.

The downside is that this can hurt performance, since the cache now has to be rebuilt from scratch.

To mitigate that, you can include (as part of the deployment process) a cache-warming routine that primes the cache before any heavy traffic hits the application.
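As a sketch, the prefix can be set globally via NHibernate's `cache.region_prefix` property and bumped on each schema change (the prefix value here is illustrative):

```xml
<!-- hibernate.cfg.xml: bump the prefix whenever a cached entity's shape changes.
     Entries under the old prefix are never read again and simply age out of memcached. -->
<property name="cache.region_prefix">app_v42</property>
```

Because reads and writes both go through the new prefix, old entries can't be deserialized into the new class shape, which avoids exactly the IndexOutOfRangeException described in the question.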

Hope this helps.