1 vote

I'm using Artemis 2.14 set up in a 4-node cluster, and message redistribution is not behaving as I expected. I'm looking for some help to clarify how it should behave, i.e. whether my config is wrong or whether I'm just expecting the system to do something it doesn't!

The Artemis cluster acts as a central messaging hub serving multiple applications. All nodes in the cluster are configured identically. The various client apps are consumers, producers or both, and are normally also clustered and scaled as appropriate.

An example of the problem situation is with a consumer app which only has two nodes and operates one consumer thread per node, so there are only ever 2 consumers on the Artemis queue it uses, i.e. at most 2 of the 4 Artemis nodes will have consumers. Producer apps send messages to the queue, and for various reasons messages can end up on nodes that don't have a consumer, e.g. because the producers' client-side load balancing doesn't seem to "prefer" nodes with consumers (I might ask a separate question on this!), or because the consumer application is down for maintenance while the producer apps are still up and sending messages. We have "redistribution-delay" configured for the queue (with a value of 600000) and we expected that these messages would automatically be moved after that time to one of the other nodes that does have consumers, but that doesn't seem to be happening.
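
For context, the producer apps connect with a multi-host core URL. The sketch below is illustrative only (hostnames, credentials and options are placeholders, not our exact client code), but it shows the connection behaviour in play:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class ProducerConnectionExample {
    public static void main(String[] args) throws Exception {
        // Placeholder hosts/credentials. With the client's default round-robin
        // connection load balancing (and useTopologyForLoadBalancing=true, also
        // a default), each new connection may land on any of the four nodes --
        // the client has no notion of which nodes currently have consumers.
        ConnectionFactory cf = new ActiveMQConnectionFactory(
                "(tcp://artemis1:61616,tcp://artemis2:61616,tcp://artemis3:61616,tcp://artemis4:61616)?ha=true");
        try (Connection connection = cf.createConnection("user", "pass")) {
            connection.start();
            // ... create a session/producer and send as normal ...
        }
    }
}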

Having looked back at the documentation, I see it says "...delay in milliseconds after the last consumer is closed on a queue before redistributing messages...". Does this mean that if there were never any consumers on a particular node (since the last restart, I guess?) then messages arriving on that node will never get redistributed? If so, any advice on how to deal with this situation?

My broker.xml (simplified and anonymized) is below.

Thanks!

<?xml version='1.0'?>
<configuration xmlns="urn:activemq"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:xi="http://www.w3.org/2001/XInclude"
               xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
    <core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="urn:activemq:core ">

        <name>${ARTEMIS_HOSTNAME}</name>
        <metrics-plugin class-name="org.apache.activemq.artemis.core.server.metrics.plugins.ArtemisPrometheusMetricsPlugin"/>
        <persistence-enabled>true</persistence-enabled>
        <journal-type>NIO</journal-type>
        <paging-directory>data/paging</paging-directory>
        <bindings-directory>data/bindings</bindings-directory>
        <journal-directory>data/journal</journal-directory>
        <large-messages-directory>data/large-messages</large-messages-directory>
        <journal-datasync>true</journal-datasync>
        <journal-min-files>2</journal-min-files>
        <journal-pool-files>10</journal-pool-files>
        <journal-device-block-size>4096</journal-device-block-size>
        <journal-file-size>10M</journal-file-size>
        <journal-buffer-timeout>644000</journal-buffer-timeout>
        <journal-max-io>1</journal-max-io>

        <connectors>
            <!-- Connector used to be announced through cluster connections and notifications -->
            <connector name="artemis1-${ENV}-connector">tcp://artemis1-${ENV}:61616</connector>
            <connector name="artemis2-${ENV}-connector">tcp://artemis2-${ENV}:61616</connector>
            <connector name="artemis3-${ENV}-connector">tcp://artemis3-${ENV}:61616</connector>
            <connector name="artemis4-${ENV}-connector">tcp://artemis4-${ENV}:61616</connector>
        </connectors>

        <disk-scan-period>5000</disk-scan-period>
        <max-disk-usage>90</max-disk-usage>
        <critical-analyzer>true</critical-analyzer>
        <critical-analyzer-timeout>120000</critical-analyzer-timeout>
        <critical-analyzer-check-period>60000</critical-analyzer-check-period>
        <critical-analyzer-policy>HALT</critical-analyzer-policy>
        <page-sync-timeout>644000</page-sync-timeout>

        <acceptors>
            <acceptor name="artemis-clients">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true</acceptor>
        </acceptors>

        <cluster-user>cluster-user</cluster-user>
        <cluster-password>XXXXXXXXXXXX</cluster-password>

        <cluster-connections>
            <cluster-connection name="artemis-cluster-${ENV}">
                <address></address>
                <connector-ref>${ARTEMIS_HOSTNAME}-connector</connector-ref>
                <retry-interval>500</retry-interval>
                <use-duplicate-detection>true</use-duplicate-detection>
                <message-load-balancing>ON_DEMAND</message-load-balancing>
                <max-hops>1</max-hops>
                <static-connectors allow-direct-connections-only="true">
                    <connector-ref>artemis1-${ENV}-connector</connector-ref>
                    <connector-ref>artemis2-${ENV}-connector</connector-ref>
                    <connector-ref>artemis3-${ENV}-connector</connector-ref>
                    <connector-ref>artemis4-${ENV}-connector</connector-ref>                
                </static-connectors>
            </cluster-connection>
        </cluster-connections>

        <address-settings>
            <!-- if you define auto-create on certain queues, management has to be auto-create -->
            <address-setting match="activemq.management#">
                <dead-letter-address>DLQ</dead-letter-address>
                <expiry-address>ExpiryQueue</expiry-address>
                <redelivery-delay>0</redelivery-delay>
                <!-- with -1 only the global-max-size is in use for limiting -->
                <max-size-bytes>-1</max-size-bytes>
                <message-counter-history-day-limit>10</message-counter-history-day-limit>
                <address-full-policy>PAGE</address-full-policy>
                <auto-create-queues>true</auto-create-queues>
                <auto-create-addresses>true</auto-create-addresses>
                <auto-create-jms-queues>true</auto-create-jms-queues>
                <auto-create-jms-topics>true</auto-create-jms-topics>
                <config-delete-addresses>FORCE</config-delete-addresses>
                <config-delete-queues>FORCE</config-delete-queues>
            </address-setting>
            <address-setting match="my.organisation.#"> <!-- standard settings for all queues --> 
                <!-- error queues automatically created based on these params -->
                <dead-letter-address>ERROR_MESSAGES</dead-letter-address> 
                <auto-create-expiry-resources>true</auto-create-expiry-resources>
                <auto-create-dead-letter-resources>true</auto-create-dead-letter-resources>
                <dead-letter-queue-prefix></dead-letter-queue-prefix> <!-- override the default -->
                <dead-letter-queue-suffix>_error</dead-letter-queue-suffix>
                <!-- redelivery & redistribution settings -->
                <redelivery-delay>600000</redelivery-delay> 
                <max-delivery-attempts>9</max-delivery-attempts> 
                <redistribution-delay>600000</redistribution-delay> 
                <max-size-bytes>-1</max-size-bytes>
                <message-counter-history-day-limit>10</message-counter-history-day-limit>
                <address-full-policy>PAGE</address-full-policy>
                <auto-create-queues>true</auto-create-queues>
                <auto-create-addresses>false</auto-create-addresses>
                <auto-create-jms-queues>false</auto-create-jms-queues>
                <auto-create-jms-topics>false</auto-create-jms-topics>
                <config-delete-addresses>FORCE</config-delete-addresses>
                <config-delete-queues>FORCE</config-delete-queues>
            </address-setting>
        </address-settings>

        <addresses>
            <address name="my.organisation.app1.jms.queue"><anycast><queue name="my.organisation.app1.jms.queue" /></anycast></address>
            <address name="my.organisation.app2.jms.queue.input"><anycast><queue name="my.organisation.app2.jms.queue.input" /></anycast></address>
            <address name="my.organisation.app3.jms.queue.input"><anycast><queue name="my.organisation.app3.jms.queue.input" /></anycast></address>
        </addresses>
        <security-settings>
            <security-setting match="#">
                <permission type="createNonDurableQueue" roles="amq"/>
                <permission type="deleteNonDurableQueue" roles="amq"/>
                <permission type="createDurableQueue" roles="amq"/>
                <permission type="deleteDurableQueue" roles="amq"/>
                <permission type="createAddress" roles="amq"/>
                <permission type="deleteAddress" roles="amq"/>
                <permission type="consume" roles="amq"/>
                <permission type="browse" roles="amq"/>
                <permission type="send" roles="amq"/>
                <!-- we need this otherwise ./artemis data imp wouldn't work -->
                <permission type="manage" roles="amq"/>
            </security-setting>
            <security-setting match="my.organisation.app1.#">
                <permission type="consume" roles="app1_role"/>
                <permission type="browse" roles="app1_role"/>
                <permission type="send" roles="app1_role"/>
            </security-setting>
            <security-setting match="my.organisation.app2.#">
                <permission type="consume" roles="app2_role"/>
                <permission type="browse" roles="app2_role"/>
                <permission type="send" roles="app2_role"/>
            </security-setting>
            <security-setting match="my.organisation.app3.#">
                <permission type="consume" roles="app3_role"/>
                <permission type="browse" roles="app3_role"/>
                <permission type="send" roles="app3_role"/>
            </security-setting>

        </security-settings>
    </core>
</configuration>
Are any of your consumers using selectors? - Justin Bertram
Can you paste your broker.xml? - Justin Bertram
Why do you need a cluster of 4 nodes with just a handful of clients? It sounds like your load could be handled by a single broker. Keep in mind that a single node can potentially handle millions of messages per second. - Justin Bertram
Why is your redistribution-delay so high (i.e. 600000)? - Justin Bertram
Hi @JustinBertram, thanks for the response. To your questions: (1) no selectors in use on any consumers. (2) broker.xml added to the question now. (3) clustering is for HA and also because we are building a system that is going to be scaled up to hundreds of clients over time. (4) a redistribution-delay of 10 minutes seems reasonable for our use case; the systems we work with typically aim to process within the hour, it's not real-time stuff. - AdrianF

1 Answer

0 votes

With your configuration, if a message is sent to a node without a consumer then it should be automatically forwarded to a node that does have a consumer. The documentation calls this "initial distribution." What's termed "redistribution" only comes into play for messages that arrived on a broker while consumers were present and subsequently disconnected.

If you think you're hitting a bug, work up a test case that reproduces the issue and open a JIRA.
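
To get you started, here's a minimal sketch of such a test using the JMS client. The hostnames, credentials and queue name are assumptions based on the broker.xml above; adjust them to your environment:

import javax.jms.*;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class InitialDistributionTest {
    public static void main(String[] args) throws Exception {
        // Placeholder hosts/credentials -- point these at two of your nodes.
        ConnectionFactory nodeA = new ActiveMQConnectionFactory("tcp://artemis1:61616");
        ConnectionFactory nodeB = new ActiveMQConnectionFactory("tcp://artemis2:61616");
        String queue = "my.organisation.app1.jms.queue";

        // 1. Attach the only consumer to node B.
        Connection consumerConn = nodeB.createConnection("user", "pass");
        consumerConn.start();
        Session cs = consumerConn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = cs.createConsumer(cs.createQueue(queue));

        // Give the cluster a moment to propagate the new consumer notification
        // to the other nodes (assumption: a couple of seconds is enough).
        Thread.sleep(2000);

        // 2. Send to node A, which has no local consumer.
        Connection producerConn = nodeA.createConnection("user", "pass");
        Session ps = producerConn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = ps.createProducer(ps.createQueue(queue));
        producer.send(ps.createTextMessage("hello"));

        // 3. With ON_DEMAND load balancing the cluster bridge should forward
        //    the message from node A to node B, so this receive should succeed.
        Message m = consumer.receive(10000);
        System.out.println(m != null
                ? "Forwarded to the node with a consumer (initial distribution works)"
                : "NOT forwarded within 10s -- candidate for a JIRA");

        producerConn.close();
        consumerConn.close();
    }
}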