3 votes

We have an application that connects to a PostgreSQL database using Hibernate, with C3P0 as the connection pool.

persistence.xml:

<persistence-unit name="tv-europe-core" transaction-type="RESOURCE_LOCAL">
  <provider>org.hibernate.ejb.HibernatePersistence</provider>

  ---classes---

  <properties>
    <property name="hibernate.dialect" value="org.hibernate.dialect.PostgreSQLDialect" />
    <property name="hibernate.connection.password" value="---password---" />
    <property name="hibernate.connection.url" value="---database---" />
    <property name="hibernate.connection.username" value="---username---" />

    <property name="hibernate.connection.driver_class" value="org.postgresql.Driver" />
    <property name="hibernate.connection.release_mode" value="after_statement" />
    <property name="hibernate.connection.autocommit" value="false" />

    <property name="hibernate.c3p0.minPoolSize" value="5"/>
    <property name="hibernate.c3p0.maxPoolSize" value="60"/>
    <property name="hibernate.c3p0.maxIdleTime" value="10"/>
    <property name="hibernate.c3p0.idleConnectionTestPeriod" value="5"/>
    <property name="hibernate.c3p0.testConnectionOnCheckin" value="true"/>
  </properties>
</persistence-unit>

Saving object:

public Entity saveOrUpdate(Entity entity, User u) {
    EntityTransaction tx = EntityManagerHelper.getEntityManager().getTransaction();
    try {
        if (!tx.isActive())
            tx.begin();
        Entity result;
        if (getID(entity) == null) {
            EntityManagerHelper.getEntityManager().persist(entity);
            result = entity;
        } else {
            result = EntityManagerHelper.getEntityManager().merge(entity);
        }
        tx.commit();
        return result;
    } catch (RuntimeException re) {
        re.printStackTrace();
        if (tx.isActive())
            tx.rollback();
        throw re;
    }
}
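The commit/rollback pattern we are aiming at is this, reduced to a standalone sketch so it can run in isolation (the `Tx` class here is only a stand-in for `EntityTransaction`, not real code; note the rollback is guarded by `isActive()`):

```java
// Sketch of the commit/rollback pattern, using a minimal stand-in
// for EntityTransaction so the flow can be exercised on its own.
public class TxPatternDemo {
    // Hypothetical stand-in for javax.persistence.EntityTransaction
    static class Tx {
        boolean active, committed, rolledBack;
        void begin() { active = true; }
        void commit() { active = false; committed = true; }
        void rollback() { active = false; rolledBack = true; }
        boolean isActive() { return active; }
    }

    static String saveOrUpdate(Tx tx, String entity, boolean fail) {
        boolean startedHere = !tx.isActive();
        if (startedHere) tx.begin();
        try {
            if (fail) throw new RuntimeException("save failed");
            String result = entity;           // persist/merge would happen here
            if (startedHere) tx.commit();     // only commit what we began
            return result;
        } catch (RuntimeException re) {
            if (tx.isActive()) tx.rollback(); // guard: only roll back an active tx
            throw re;
        }
    }

    public static void main(String[] args) {
        Tx ok = new Tx();
        assert "e".equals(saveOrUpdate(ok, "e", false)) && ok.committed;
        Tx bad = new Tx();
        try { saveOrUpdate(bad, "e", true); } catch (RuntimeException expected) { }
        assert bad.rolledBack && !bad.isActive();
        System.out.println("tx pattern ok");
    }
}
```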

Loading objects:

@SuppressWarnings("unchecked")
public List<Entity> findByProperty(String propertyName, final Object value,
        final int... rowStartIdxAndCount) {
    try {
        final String queryString = "select model from " + clazz.getName()
                + " model where model." + propertyName + " = :propertyValue";
        Query query = EntityManagerHelper.getEntityManager().createQuery(queryString);
        query.setParameter("propertyValue", value);
        if (rowStartIdxAndCount != null && rowStartIdxAndCount.length > 0) {
            int rowStartIdx = Math.max(0, rowStartIdxAndCount[0]);
            if (rowStartIdx > 0) {
                query.setFirstResult(rowStartIdx);
            }
            if (rowStartIdxAndCount.length > 1) {
                int rowCount = Math.max(0, rowStartIdxAndCount[1]);
                if (rowCount > 0) {
                    query.setMaxResults(rowCount);
                }
            }
        }
        return query.getResultList();
    } catch (RuntimeException re) {
        re.printStackTrace();
        throw re;
    }
}

Creating EntityManagerFactory and getting EntityManager:

private static EntityManagerFactory emf;
private static final ThreadLocal<EntityManager> threadLocal = new ThreadLocal<EntityManager>();  

public static EntityManager getEntityManager() throws HibernateException {
    EntityManager session = threadLocal.get();

    if (session == null || !session.isOpen()) {
        session = (emf != null) ? emf.createEntityManager() : null;
        threadLocal.set(session);
    }

    return session;
}
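One thing we are unsure about: nothing here ever closes the EntityManager or clears the ThreadLocal. The close helper we think is missing would look roughly like this, sketched with a stand-in `Resource` class instead of a real EntityManager so it runs standalone:

```java
// Sketch of the ThreadLocal close/remove pattern, with a stand-in
// Resource class in place of a real EntityManager.
public class ThreadLocalCloseDemo {
    static class Resource {                  // stand-in for EntityManager
        boolean open = true;
        void close() { open = false; }
        boolean isOpen() { return open; }
    }

    private static final ThreadLocal<Resource> threadLocal = new ThreadLocal<>();

    static Resource get() {
        Resource r = threadLocal.get();
        if (r == null || !r.isOpen()) {
            r = new Resource();
            threadLocal.set(r);
        }
        return r;
    }

    // The missing piece: call this at the end of every unit of work
    // (e.g. in a finally block or a servlet filter).
    static void closeCurrent() {
        Resource r = threadLocal.get();
        if (r != null) {
            if (r.isOpen()) r.close();       // releases the pooled connection
            threadLocal.remove();            // drop the stale reference
        }
    }

    public static void main(String[] args) {
        Resource r = get();
        assert r.isOpen();
        closeCurrent();
        assert !r.isOpen();
        System.out.println("closed and removed");
    }
}
```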

The problem is that from time to time a database connection stays in the "idle in transaction" state and is never returned to the pool. After several days the application stops responding because the number of connections exceeds the pool's maximum size.
When hibernate.connection.autocommit is enabled, those connections no longer show up as "idle in transaction", but they are still blocked somehow and the resulting problem is the same.

Are we doing something wrong (some missing configuration, etc.)?

I have noticed that when I use eager loading only, there is no problem, but I have to use lazy loading for performance reasons. Should the EntityManagerFactory be explicitly closed? I hope not, because we need the app to run for a very long time, and I suspect I cannot simply reset it while somebody is working with persistent objects.

In the log I see the following, but I don't know whether it relates to our problem:

java.lang.Exception: DEBUG -- CLOSE BY CLIENT STACK TRACE
    at com.mchange.v2.c3p0.impl.NewPooledConnection.close(NewPooledConnection.java:491)
    at com.mchange.v2.c3p0.impl.NewPooledConnection.close(NewPooledConnection.java:191)
    at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager.destroyResource(C3P0PooledConnectionPool.java:470)
    at com.mchange.v2.resourcepool.BasicResourcePool$1DestroyResourceTask.run(BasicResourcePool.java:964)
    at com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:547)

Thanks for any help! :)


3 Answers

4 votes

Your configuration doesn't look correct to me:

  1. transaction-type="RESOURCE_LOCAL" means you are not in a JTA environment.
  2. hibernate.connection.release_mode set to after_statement combined with autocommit = false is very uncommon.

"after_statement" can be used only if your connection provider supports aggressive release (and is able to return the same connection for each statement in the same transaction).

It is possible that AFTER_STATEMENT is being ignored by Hibernate (because Hibernate detects that this release mode is not compatible with your setup) and AFTER_TRANSACTION is used instead. But just to be sure you are not using it wrongly, set:

 <property name="hibernate.connection.release_mode" value="auto" />

This sets up the AFTER_TRANSACTION release mode in a non-JTA environment.

I'm not sure this will fix your problem (there is a good chance you are already running in after_transaction mode). If it doesn't fix it, please comment and a deeper investigation will be required.

EDIT

BTW, rebuilding the SessionFactory seems very strange. A SessionFactory is usually a singleton, and its major purpose is to build new Sessions very efficiently. In general, there is no reason to re-create it (doing so is time-consuming and useless, since the SessionFactory only depends on static data).

The only reason I can see to re-create the SessionFactory is if your application changes the data model at runtime, i.e. creates new tables or columns (and those changes will be seen by a new SessionFactory only if, at the same time, your app modifies your mapping files or byte code to add new annotations, fields and classes). I assume you are not doing that.

EDIT 2

As I said in the previous edit: avoid rebuilding the SessionFactory. Make that method private so that you are sure it isn't called more than once. If your code is re-building the SessionFactory, that may be the cause of your problem, since each new SessionFactory will probably consume some connections (due to the associated C3P0 setup).

Other point: you said that when you disable lazy loading, there is no problem anymore. So the problem may also be caused by sessions created for lazy loading and not closed properly. Try to debug the lazy-loading operations to see where the session comes from, and whether or not it is closed.
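If it's hard to see which code path checks out connections without returning them, c3p0 itself can help. These are standard c3p0 settings (placed in a c3p0.properties file on the classpath), useful purely as a debugging aid:

```properties
# Kill connections checked out for longer than 5 minutes and log the
# stack trace of the checkout site. Debugging aid only - remove the
# timeout once the leak is found.
c3p0.unreturnedConnectionTimeout=300
c3p0.debugUnreturnedConnectionStackTraces=true
```

With these enabled, every leaked connection is eventually destroyed and c3p0 logs where it was checked out, which usually points straight at the session that was never closed.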


EDIT 3 (as a reply to your last comment)

You are facing a very common architectural design problem, and let's say there are two approaches to solving it: the good one and the (very) bad one.

The (very) bad one: using the open-session-in-view pattern.

The idea is to re-open an EntityManager and re-attach your entities when generating the view, so that you don't get lazy-initialization exceptions. In the short term, this approach gives you the false feeling that your app is running well. Once it is in production with many concurrent users and/or more and more records in your database, you run a huge risk of real performance problems (in memory usage and/or response time).

(The root cause of those potential problems is that during development, on a small database, you won't notice that this or that view is fetching a lazily-initialized collection with 10 objects... but in production, your small collection will have grown to 10,000 objects!)

Those problems will be difficult to fix since:

  • they will be present in multiple views
  • they will be difficult to unit/load test (because they live in the view layer)

In my opinion, this approach can only be used for small, non-critical apps that will never face heavy load or large amounts of data.

The good one: using a layered architecture.

The view layer doesn't touch the entity manager. It receives the data to display from the controller layer, and all the data is already there: no need to fetch lazy collections here.

The controller layer has two roles:

  • implementing your business logic
  • managing the EntityManager life-cycle (and transaction boundaries) and providing an EntityManager that can be used by the DAO layer

Additionally, the controller layer must provide a complete object graph to the view layer. "Complete object graph" means the view layer will never receive a non-initialized lazy collection when the view needs to display data from that collection.

The DAO layer :

It simply executes queries to fetch data (that's where you write your JPA/HQL/SQL queries). This layer doesn't do anything special with the EntityManager; it just uses the one provided by the controller layer.

The DAO layer must provide a wide enough range of queries to fetch each entity with or without its lazy collections, so that all the needs of the controller layer are met.

The major advantage of the layered approach is that you see the requirements of your views very early in the development process, and you can adapt and optimize your queries when required. (You need to fix all your lazy-initialization exceptions one by one, but doing so gives you a good picture of each view's requirements.)
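The layering above can be sketched in a few lines. This is a standalone illustration, not a real JPA setup: `Session` is a stand-in for an EntityManager, and the class/method names are hypothetical. The point is who owns the life-cycle (the controller) and who merely uses the session (the DAO):

```java
// Minimal sketch of the layered architecture: the controller owns the
// session life-cycle, the DAO only uses what it is handed, and the view
// receives a complete object graph.
import java.util.Arrays;
import java.util.List;

public class LayeredDemo {
    static class Session {                       // stand-in for EntityManager
        boolean open = true;
        void close() { open = false; }
    }

    // DAO layer: runs queries with the session it is given, nothing more.
    static class OrderDao {
        List<String> findOrderLinesForCustomer(Session s, String customer) {
            // a real DAO would issue a fetch-join query here so the
            // collection comes back fully initialized
            return Arrays.asList(customer + "-line1", customer + "-line2");
        }
    }

    // Controller layer: opens and always closes the session, and hands
    // the view a complete graph - nothing lazy left to initialize.
    static List<String> showOrders(OrderDao dao, String customer) {
        Session s = new Session();
        try {
            return dao.findOrderLinesForCustomer(s, customer);
        } finally {
            s.close();                           // always returned/closed
        }
    }

    public static void main(String[] args) {
        List<String> view = showOrders(new OrderDao(), "acme");
        // the view layer only renders what it was given - no session needed
        assert view.size() == 2;
        System.out.println(view);
    }
}
```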

0 votes

I also modified my "select method": I enclosed it in a transaction (like in my save method), and there is a change in behavior. As when hibernate.autocommit was enabled, connections no longer end up in the "idle in transaction" state, but they are still blocked and their number grows. They are not being destroyed by the hibernate.c3p0.maxIdleTime setting (other connections are). When I also close the EntityManager after each select, I later get a LazyInitializationException. Is it possible that those connections are somehow reserved for the lazily loaded objects?

0 votes

Change your hibernate.connection.release_mode property value to after_transaction. I think that should fix your problem.
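In your persistence.xml that would be:

```xml
<property name="hibernate.connection.release_mode" value="after_transaction" />
```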