
I have a strange problem with my Spring webapp (running on a local Jetty), which connects to a locally running ActiveMQ broker for JMS functionality. As soon as I start the broker, the application becomes incredibly slow: for example, starting the ApplicationContext with an active broker takes forever (> 10 minutes; I have not yet waited long enough for it to complete). If I start the broker after the webapp (i.e. after the ApplicationContext has loaded), the application runs, but very slowly (requests that usually take < 1 s take > 30 s). All operations take longer, even the ones that don't involve JMS. When I run the application without an ActiveMQ broker, everything runs smoothly (except the JMS-related stuff, of course ;-) )

Here's what I tried so far:

  1. Updated the ActiveMQ version to 5.10.1
  2. Used a standalone ActiveMQ instead of the Maven plugin
  3. Moved the broker from a separate JVM (started via the ActiveMQ Maven plugin, connection via JNDI lookup in the Jetty config) into the same JVM (started via Spring config, without JNDI)
  4. Changed the ActiveMQ transport from tcp to vm
  5. Tried several ActiveMQ settings (alwaysSyncSend, alwaysSessionAsync, producerWindowSize)
  6. Used CachingConnectionFactory and PooledConnectionFactory (see the sketch after this list)
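
For reference, here is a minimal sketch of the kind of connection factory wiring I mean in points 4 and 6. It assumes Spring Java config and an embedded vm:// broker; the bean names, broker URL, and cache size are just examples, not my actual setup:

    import javax.jms.ConnectionFactory;

    import org.apache.activemq.ActiveMQConnectionFactory;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.jms.connection.CachingConnectionFactory;

    @Configuration
    public class JmsConfig {

        // Plain ActiveMQ connection factory; the vm:// transport starts an
        // embedded broker inside the same JVM (point 3/4 above).
        @Bean
        public ActiveMQConnectionFactory activeMQConnectionFactory() {
            return new ActiveMQConnectionFactory("vm://localhost");
        }

        // Spring's CachingConnectionFactory reuses the underlying connection
        // and caches sessions and producers (point 6 above).
        @Bean
        public ConnectionFactory cachingConnectionFactory() {
            CachingConnectionFactory ccf = new CachingConnectionFactory(activeMQConnectionFactory());
            ccf.setSessionCacheSize(10); // example value
            return ccf;
        }
    }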

When analyzing a thread dump (taken with jstack) I see many ActiveMQ threads parked, waiting on a monitor. They look like this:

"ActiveMQ VMTransport: vm://localhost#0-3" daemon prio=6 tid=0x000000000b1a3000 nid=0x1840 waiting on condition [0x00000000177df000]
   java.lang.Thread.State: TIMED_WAITING (parking)
    at sun.misc.Unsafe.park(Native Method)
    - parking to wait for  <0x00000000f786d670> (a java.util.concurrent.SynchronousQueue$TransferStack)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
    at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:424)
    at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:323)
    at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:874)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:955)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
    at java.lang.Thread.run(Thread.java:662)

Any help is greatly appreciated!


1 Answer


I found the cause of the issue and was able to fix it: we were passing a transaction manager to the AbstractMessageListenerContainer. In production an XA transaction manager is in use, but in the local Jetty environment only a JpaTransactionManager is used. Apparently the JMS listener waits forever for an XA transaction to be committed, which never happens in the local environment. By overriding the bean definition of the AbstractMessageListenerContainer for the local environment, not setting a transaction manager but using sessionTransacted="true" instead, everything works fine. I got the idea that it might be related to transaction handling from enabling the ActiveMQ logging, which showed that something was wrong with the transaction (transactionContext.getTransactionId() returned null).
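
For illustration, a minimal sketch of what such a local override could look like, using a DefaultMessageListenerContainer (a concrete subclass of AbstractMessageListenerContainer). The profile name, destination, and listener are hypothetical; in production the container would get the XA transaction manager instead:

    import javax.jms.ConnectionFactory;
    import javax.jms.MessageListener;

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.context.annotation.Profile;
    import org.springframework.jms.listener.DefaultMessageListenerContainer;

    @Configuration
    @Profile("local") // hypothetical profile for the local Jetty environment
    public class LocalJmsListenerConfig {

        // Local override: no transaction manager is set, so the container never
        // waits for an XA commit; the JMS session itself is transacted instead.
        @Bean
        public DefaultMessageListenerContainer messageListenerContainer(ConnectionFactory connectionFactory,
                                                                        MessageListener myListener) {
            DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
            container.setConnectionFactory(connectionFactory);
            container.setDestinationName("example.queue"); // hypothetical destination
            container.setMessageListener(myListener);
            container.setSessionTransacted(true);          // sessionTransacted="true"
            // container.setTransactionManager(...)        // deliberately NOT set locally
            return container;
        }
    }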