I have a Java program with 200 concurrent consumers reading from a queue provided by a standalone HornetQ server. The listeners just pick up an item from the queue, wait randomly for 1.5 - 2.5 seconds and acknowledge it back to the queue (the sessions use CLIENT_ACKNOWLEDGE).

Now I put 20,000 messages in the queue, start these 200 consumers, and after 5 seconds I call the close() method (I tried stop() too) on the Connection. By this time the consumers have processed around 1,000 messages. But instead of finishing their current work and receiving nothing more from the queue, they take another 3 minutes to process roughly 10,000 more messages before they finally stop and the application ends (connection.close() is a blocking call).
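
For context, here is roughly what the consumer side looks like. This is only a minimal sketch using the plain JMS API; lookupFactory and lookupQueue are placeholders for however the connection factory and queue are actually obtained (e.g. a JNDI lookup):

import javax.jms.*;
import java.util.concurrent.ThreadLocalRandom;

public class ConsumerDemo {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = lookupFactory();   // placeholder: JNDI lookup or HornetQ client API
        Queue queue = lookupQueue();                   // placeholder
        Connection connection = factory.createConnection();

        // 200 sessions, one consumer each, all in CLIENT_ACKNOWLEDGE mode.
        for (int i = 0; i < 200; i++) {
            Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(queue);
            consumer.setMessageListener(message -> {
                try {
                    // Simulate 1.5 - 2.5 seconds of work, then acknowledge.
                    Thread.sleep(ThreadLocalRandom.current().nextLong(1500, 2500));
                    message.acknowledge();
                } catch (JMSException e) {
                    e.printStackTrace();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        connection.start();

        // After 5 seconds, try to stop consumption. close() blocks until listeners return,
        // but in practice another ~10,000 messages get processed before it comes back.
        Thread.sleep(5000);
        connection.close();
    }

    private static ConnectionFactory lookupFactory() { throw new UnsupportedOperationException("replace with real lookup"); }
    private static Queue lookupQueue() { throw new UnsupportedOperationException("replace with real lookup"); }
}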

I suspect this might be due to some kind of buffer on the client side, so I've been looking for ways to limit it and have set these four properties as restrictively as possible in the factory configuration:

<producer-window-size>1</producer-window-size>
<consumer-window-size>0</consumer-window-size>
<consumer-max-rate>1</consumer-max-rate>
<producer-max-rate>1</producer-max-rate>
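
For reference, the same four limits can also be set programmatically on the client-side factory. This is only a rough sketch assuming the HornetQ 2.3-style client API (createConnectionFactoryWithoutHA plus the window/rate setters); the exact method names differ on older client versions, so verify against the version you are running:

import org.hornetq.api.core.TransportConfiguration;
import org.hornetq.api.jms.HornetQJMSClient;
import org.hornetq.api.jms.JMSFactoryType;
import org.hornetq.core.remoting.impl.netty.NettyConnectorFactory;
import org.hornetq.jms.client.HornetQConnectionFactory;

public class RestrictiveFactory {
    static HornetQConnectionFactory create() {
        HornetQConnectionFactory cf = HornetQJMSClient.createConnectionFactoryWithoutHA(
                JMSFactoryType.CF,
                new TransportConfiguration(NettyConnectorFactory.class.getName()));
        cf.setProducerWindowSize(1);
        cf.setConsumerWindowSize(0);   // disable client-side message buffering
        cf.setConsumerMaxRate(1);
        cf.setProducerMaxRate(1);
        return cf;
    }
}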

I understand I don't necessarily need all of these for my case, but I just wanted to try everything. I know the settings are being picked up, because setting the consumer window size to zero resolved another problem I had with the order in which messages were consumed.

1 Answer

We have changed our code to interrupt the communication upstream as per these commits here:

HORNETQ-1379 & https://bugzilla.redhat.com/show_bug.cgi?id=1125042 - forcing clients out when the server is stuck on delivery over OIO

The commit may give you an idea of what was changed:

https://github.com/hornetq/hornetq/commit/4c05475

Basically, we now call forceClose on the Netty connection at the time the connection is closed, which should interrupt the communication with any outstanding consumers the moment you close the connection.

I don't think this commit was done on the 2.2 branch.