
I have WildFly 21.0.0 with a jms-queue configured and a couple of in-vm connectors/acceptors. I also have a Message Driven Bean (MDB) with 5 max concurrent sessions that handles the received messages and does some dirty work. In some cases the work takes longer than 5 minutes, and the queue then redelivers the message to the MDB, causing a mess.

I understand the redelivery concepts (redelivery delay, etc.), but I can't find any documentation on how long ActiveMQ Artemis waits before deciding that a message in the delivering state (i.e. waiting for the auto-acknowledge at the end of a long onMessage execution) must be redelivered. From the logs it looks like it waits 5 minutes, then redelivers the message after the 2-second redelivery delay.

Is this time configurable?

Thanks!

        <subsystem xmlns="urn:jboss:domain:messaging-activemq:4.0">
            <server name="default">
                <in-vm-connector name="in-vm" server-id="0"/>
                <in-vm-acceptor name="in-vm" server-id="0">
                    <param name="buffer-pooling" value="false"/>
                </in-vm-acceptor>
                <jms-queue name="MyJobsQueue" entries="java:/jms/MyJobsQueue" />
                <pooled-connection-factory name="activemq-ra" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" connectors="in-vm" transaction="none" pre-acknowledge="true"/>
            </server>
        </subsystem>
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

@MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "destination", propertyValue = "MyJobsQueue"),
        @ActivationConfigProperty(propertyName = "acknowledgeMode", propertyValue = "Auto-acknowledge"),
        @ActivationConfigProperty(propertyName = "maxSession", propertyValue = "5")
})
public class MyJobsListener implements MessageListener {

    private static final Logger logger = LoggerFactory.getLogger(MyJobsListener.class);
    
    @Override
    public void onMessage(Message m) {
        try {
            logger.info("Received message ({}) (Redelivered:{})", m.getJMSMessageID(), m.getJMSRedelivered());
            
            // Simulate 10 minutes of work, longer than the 5-minute transaction timeout
            Thread.sleep(10 * 60 * 1000);
        } catch (InterruptedException e) {
            // Restore the interrupt status instead of swallowing it
            Thread.currentThread().interrupt();
        } catch (JMSException e) {
            logger.error("Failed to read the message headers", e);
        }
    }
}
2020-11-19 04:00:00 INFO - Received message (ID:5002ad76-2a13-11eb-bedf-005056b94ad2)  (Redelivered:false)
2020-11-19 04:05:00 INFO - Received message (ID:5002ad76-2a13-11eb-bedf-005056b94ad2)  (Redelivered:true)
2020-11-19 04:10:00 INFO - Received message (ID:5002ad76-2a13-11eb-bedf-005056b94ad2)  (Redelivered:true)
2020-11-19 04:15:00 INFO - Received message (ID:5002ad76-2a13-11eb-bedf-005056b94ad2)  (Redelivered:true)
2020-11-19 04:45:24 INFO - Received message (ID:5002ad76-2a13-11eb-bedf-005056b94ad2)  (Redelivered:true)
1 Answer


MDBs (just like all other kinds of EJBs) implicitly support JTA transactions and as soon as the MDB receives a message a transaction is started by the container. This is done so that any transactional work (e.g. updating a database, sending another JMS message, etc.) done while processing the message will be a part of the transaction for consuming the message itself. In this way a message can be a "unit of work."

It is therefore important to note that the default transaction timeout in WildFly is 300 seconds (i.e. 5 minutes). Once the transaction times out the message will be rolled back onto the queue and potentially redelivered. How redelivery works ultimately depends on the broker's configuration, of course.
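For completeness, the broker-side redelivery behavior can be tuned with an address-setting in the same messaging-activemq subsystem shown in the question. A minimal sketch (the match name and values here are illustrative, not taken from your configuration):

```xml
<subsystem xmlns="urn:jboss:domain:messaging-activemq:4.0">
    <server name="default">
        <!-- Illustrative values: wait 2s before redelivery, give up after 10 attempts -->
        <address-setting name="jms.queue.MyJobsQueue"
                         redelivery-delay="2000"
                         max-delivery-attempts="10"/>
    </server>
</subsystem>
```

These settings control what the broker does after a rollback; they do not change how long the transaction itself may run.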

In any case, you can prevent this kind of issue for long-running MDBs by disabling JTA transactions with this annotation:

@TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
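Applied to the MDB from the question, this would look roughly as follows (a sketch using the standard javax.ejb API; for MDBs the attribute is typically placed on the class or on onMessage):

```java
import javax.ejb.MessageDriven;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven(/* activationConfig as before */)
@TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
public class MyJobsListener implements MessageListener {

    @Override
    public void onMessage(Message m) {
        // Long-running work here no longer runs inside a container-managed
        // JTA transaction, so the 5-minute transaction timeout does not apply.
    }
}
```

Note that without a transaction the message is acknowledged according to the acknowledgeMode activation property rather than on transaction commit.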

If you don't want to disable container-managed transactions for the MDB you can change the default transaction timeout by adding the default-timeout attribute to the coordinator-environment element in <subsystem xmlns="urn:jboss:domain:transactions:5.0">, e.g.:

<coordinator-environment statistics-enabled="${wildfly.transactions.statistics-enabled:${wildfly.statistics-enabled:false}}" default-timeout="600"/>
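If only this one MDB needs more time, WildFly also offers a proprietary per-bean timeout via the @TransactionTimeout annotation from the jboss-ejb3-ext-api artifact, which avoids raising the timeout server-wide (a sketch; verify the annotation is available on your WildFly version's classpath):

```java
import java.util.concurrent.TimeUnit;

import org.jboss.ejb3.annotation.TransactionTimeout;

// Hypothetical value: allow this bean's transactions to run for 15 minutes
@TransactionTimeout(value = 15, unit = TimeUnit.MINUTES)
public class MyJobsListener implements MessageListener { /* ... */ }
```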

However, keep in mind that long-running transactions are an anti-pattern so I would discourage you from increasing the timeout.