2 votes

I am using Hibernate 3.6.3.Final and Hibernate Search 3.4.1. I wrote an HQL delete query. The objects are deleted from the database, but they are not removed from the Lucene index after the transaction completes. Here is the query:

Session session = factory.getCurrentSession();
Query q = session.createQuery("delete from Charges cg where cg.id in (:charges)");
q.setParameterList("charges", chargeIds);
int result = q.executeUpdate();

What am I missing? What do I need to do to solve this issue?

I created a PostDeleteEvent, however the FullTextEventListener doesn't appear to receive the event:

 SessionImpl sessImpl = (SessionImpl) factory.getCurrentSession();
 SessionImplementor implementor = sessImpl.getPersistenceContext().getSession();
 EntityPersister persister = implementor.getEntityPersister("Charges", cg);
 EntityEntry entry = sessImpl.getPersistenceContext().getEntry(cg);

 // Note: the deleted state is supposed to hold the entity's property
 // values, not the entity instance itself
 Object[] deletedState = new Object[] { cg };
 entry.setDeletedState(deletedState);

 // The first constructor argument is the deleted entity, not its EntityEntry.
 // Also, the event is only constructed here; it is never dispatched to the
 // registered PostDeleteEventListeners, so the listener never receives it.
 PostDeleteEvent pdEvent = new PostDeleteEvent(cg, entry.getId(), deletedState,
                    entry.getPersister(), (EventSource) sessImpl);

Thank you.


1 Answer

1 vote

This is an expected limitation, documented in the Hibernate Search reference.

Bulk HQL statements (update/delete) are interpreted to:

  • Generate the batch SQL to perform the operation
  • Invalidate any relevant cache (if using any second level cache)
  • See if pending operations need to be flushed to the database before executing the query

But it is not going to load all potential matches from the database into memory! That would kill performance.

Still, Lucene needs the entities in memory to update the index, so this is indeed an expected design limitation: you should not run bulk statements on indexed types, but rather load the entities and apply the changes in a loop.
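For instance, a minimal sketch of that loop, reusing factory, Charges and chargeIds from the question (deleting through the Session fires the delete events Hibernate Search listens to):

 Session session = factory.getCurrentSession();
 // Load the entities the bulk statement would have deleted
 List<Charges> charges = session
         .createQuery("from Charges cg where cg.id in (:charges)")
         .setParameterList("charges", chargeIds)
         .list();
 // Delete one by one: each delete fires a PostDeleteEvent, so Hibernate
 // Search purges the matching Lucene document on flush/commit
 for (Charges cg : charges) {
     session.delete(cg);
 }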

Loading all entities will be slow, as it needs to materialize all the data in memory, but that is required to feed Lucene anyway; a good second-level cache configuration usually does the trick, or just start the MassIndexer to re-sync it all if the changes are massive.
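If you choose the MassIndexer route, here is a minimal sketch using org.hibernate.search.Search and FullTextSession (the MassIndexer API exists since Hibernate Search 3.2; this assumes Charges is an @Indexed entity):

 FullTextSession fullTextSession = Search.getFullTextSession(session);
 // Rebuild the Charges index from the current database contents
 fullTextSession.createIndexer(Charges.class)
         .batchSizeToLoadObjects(25)
         .threadsToLoadObjects(4)
         .startAndWait(); // blocks until reindexing completes (declares InterruptedException)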