3 votes

We're running ActiveMQ 5.6.0, with 3 brokers operating in a static network in our test environment. Here's the current scenario: we have 6 consumers randomly connecting to the 3 brokers. One broker has 3 consumers, the second has 2, and the third has 1. When we pile messages onto the queue, we see messages backlogging on the third broker (the one with 1 consumer); the other two brokers aren't given any of the backlog, and the remaining 5 consumers sit idle.

Below you'll find the configuration for one of our brokers (dev.queue01); the other 2 are similar, with the appropriate changes for the static hostnames.

I would expect that messages would be automatically distributed to the other brokers for consumption by the idle consumers. Please tell me if I've missed something in my description of the problem. Thanks in advance for any guidance.

<beans
    xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
    http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">

<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="locations">
        <value>file:${activemq.conf}/credentials.properties</value>
    </property>
</bean>

<broker xmlns="http://activemq.apache.org/schema/core" brokerName="prd.queue01" dataDirectory="${activemq.data}">

    <destinationPolicy>
        <policyMap>
          <policyEntries>
            <policyEntry topic=">" producerFlowControl="false" memoryLimit="1mb"> 
              <pendingSubscriberPolicy>
                <vmCursor />
              </pendingSubscriberPolicy>
            </policyEntry>
            <policyEntry queue=">" producerFlowControl="false" memoryLimit="64mb" optimizedDispatch="true" enableAudit="false" prioritizedMessages="true"> 
              <networkBridgeFilterFactory>
                <conditionalNetworkBridgeFilterFactory replayWhenNoConsumers="true" />
              </networkBridgeFilterFactory>
            </policyEntry>
          </policyEntries>
        </policyMap>
    </destinationPolicy>

    <managementContext>
        <managementContext createConnector="true"/>
    </managementContext>

    <persistenceAdapter>
        <amqPersistenceAdapter directory="${activemq.data}/data/amqdb"/>
    </persistenceAdapter>

    <systemUsage>
        <systemUsage>
            <memoryUsage>
                <memoryUsage limit="256 mb"/>
            </memoryUsage>
            <storeUsage>
                <storeUsage limit="750 gb"/>
            </storeUsage>
            <tempUsage>
                <tempUsage limit="750 gb"/>
            </tempUsage>
        </systemUsage>
    </systemUsage>
    <transportConnectors>
        <transportConnector name="openwire" uri="tcp://0.0.0.0:61616" updateClusterClients="true" updateClusterClientsOnRemove="true" rebalanceClusterClients="true"/>
    </transportConnectors>

    <networkConnectors>
      <networkConnector uri="static:(tcp://dev.queue02:61616,tcp://dev.queue03:61616)" name="queues_only" conduitSubscriptions="false" decreaseNetworkConsumerPriority="false" networkTTL="4">
      <dynamicallyIncludedDestinations>
        <queue physicalName=">"/> 
      </dynamicallyIncludedDestinations>
      <excludedDestinations>
        <topic physicalName=">"/> 
      </excludedDestinations>
    </networkConnector>
</networkConnectors>


</broker>
<import resource="jetty.xml"/>
</beans>


3 Answers

3 votes

Late answer, but hopefully it might help future readers.

You've described a network ring of brokers, where B1, B2, and B3 all talk to one another, with 3 consumers (C1-C3) on B1, 2 consumers (C4 & C5) on B2, and 1 consumer (C6) on B3. You didn't describe where your messages are being produced (which broker they go to first), but let's say it's B3. (B3 will produce the worst-case scenario that most accurately matches your description, though you'll still see uneven load no matter where the message is produced.)

B3 has three attached consumers: C6, B1, and B2. That broker will round-robin messages across those consumers, so 1/3 of the messages will go to C6, 1/3 to B1, and 1/3 to B2.

B1 has five attached consumers: C1, C2, C3, B2, and B3. But messages won't be delivered to the same broker they just came from, so there are 4 consumers that count for the messages from B3: C1, C2, C3, and B2. So of the 1/3 of the total messages, C1, C2, and C3 will each get 1/4 (1/12 of the total), and B2 will get the same 1/12 of the total. More on that in a second.

B2 has four attached consumers: C4, C5, B1, and B3. But messages won't be delivered to the same broker they just came from, so there are 3 consumers that count for the messages from B3: C4, C5, and B1. So of the 1/3 of the total messages, C4 and C5 will each get 1/3 (1/9 of the total), and B1 will get the same 1/9 of the total. More on that in a second, too.

So far we've seen C6 get 1/3 of the total messages, C1-C3 get 1/12 of the total messages, C4-C5 get 1/9 of the total messages, and 1/12 + 1/9 = 7/36 of the total messages routed on to a second broker. Let's return to those messages now.

The messages that have followed the B3 -> B1 -> B2 path (1/12 of the total) get round-robined across C4, C5, and B3 (excluding only B1, the broker they just came from), an additional 1/36 of the total each. So C4 and C5 will each have received 1/9 + 1/36 = 5/36 of the total.

Similarly, the messages that have followed the B3 -> B2 -> B1 path (1/9 of the total) get round-robined across C1, C2, C3, and B3, an additional 1/36 of the total each; so C1, C2, and C3 will each have received 1/12 + 1/36 = 1/9 of the total.

Of the messages that have followed the B3 -> B1 -> B2 -> B3 path (1/36 of the total), half go to C6 (1/72 of the total), and half go to B1 (1/72 of the total). Similarly, of the messages that have followed the B3 -> B2 -> B1 -> B3 path (1/36 of the total), half go to C6 (1/72 of the total), and half go to B2 (1/72 of the total). So C6 gets 1/36 of the messages (totaling 13/36), B1 gets 1/72 of the total, and B2 gets 1/72 of the total.

We're getting into diminishing returns now, but you can already see that C6 gets an outsized share (36%) of the total messages, while the consumers connected to B1 (the broker with the most consumers) each get an undersized share (around 11-12%), resulting in C6 having lots of work to do and C1-C5 having far less work and sitting idle, as you observed. You can also see how some messages take a long path through the network, resulting in high latency, but that's not what your question was about.
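The fraction bookkeeping above can be checked mechanically with exact arithmetic. This is a sketch of the dispatch model the answer uses (each broker round-robins across all of its attached consumers, local and networked, excluding only the bridge a message just arrived on); the consumer and broker names match the answer, and only the first two network hops are modeled:

```python
from fractions import Fraction as F

# Share of all messages each consumer receives, assuming all production
# happens on B3, under the round-robin model described in the answer.

c6 = F(1, 3)              # B3 splits 1/3 each across C6, B1, B2
to_b1 = to_b2 = F(1, 3)

# B1 splits its 1/3 across C1, C2, C3, B2 (B3 excluded: just came from there)
c1 = c2 = c3 = to_b1 / 4
b1_to_b2 = to_b1 / 4      # B3 -> B1 -> B2 traffic: 1/12

# B2 splits its 1/3 across C4, C5, B1 (B3 excluded)
c4 = c5 = to_b2 / 3
b2_to_b1 = to_b2 / 3      # B3 -> B2 -> B1 traffic: 1/9

# Second hop: B2 splits the 1/12 across C4, C5, B3 (B1 excluded)
c4 += b1_to_b2 / 3
c5 += b1_to_b2 / 3
back_via_b2 = b1_to_b2 / 3    # B3 -> B1 -> B2 -> B3: 1/36

# B1 splits the 1/9 across C1, C2, C3, B3 (B2 excluded)
c1 += b2_to_b1 / 4
c2 += b2_to_b1 / 4
c3 += b2_to_b1 / 4
back_via_b1 = b2_to_b1 / 4    # B3 -> B2 -> B1 -> B3: 1/36

# Traffic returning to B3 splits between C6 and the bridge not just used
c6 += back_via_b2 / 2 + back_via_b1 / 2

print(c6)   # 13/36 -- over a third of all messages land on C6
print(c4)   # 5/36 each for C4/C5 so far
print(c1)   # 1/9 each for C1-C3 so far
```

Continuing the remaining hops only nudges these shares slightly; the imbalance toward C6 persists.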

1 vote

A bit of a long shot, as I'm not really sure, but in your config you have all topics excluded:

<excludedDestinations>
    <topic physicalName=">"/> 
</excludedDestinations>

Can you remove that restriction for testing? ActiveMQ uses advisory topics to communicate when clients connect to a specific queue/topic, so it's possible your 3rd broker does not know about the other clients because you blocked the advisory topics.

1 vote

If I understood you correctly, broker means queue here.

  • All your brokers hold the same types of objects.
  • All your consumers do the same kind of processing.
  • You want to share the workload equally between your consumers.
  • The sequence of operations is not especially important.

I tried to do the same thing on ActiveMQ 5.5.1. All I did was create one queue and multiple consumers, all pointed at that same queue.

ActiveMQ automatically took care of the distribution.

Here is an example of what I observed:

Suppose a queue holds 2000 messages. If I point 2 consumers at this queue at the same time, the 1st consumer will process messages starting from 0, while the 2nd consumer will start processing from a seemingly random offset (say, 700).

Once the 1st consumer has finished processing messages 0-700 and the 2nd consumer has processed 200 messages (700-900), the 1st consumer may continue from another seemingly random offset (say, 1200).

This offset adjustment was handled automatically by ActiveMQ.

I have observed this myself and am quite sure it happens.

Hope I have answered your question (or at least understood your problem correctly).

What I did not understand is: if ActiveMQ creates queues, how does it serve messages from somewhere in the middle?
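The "random offset" behavior described above is consistent with consumer prefetch: the broker hands each competing consumer a block of pending messages at once (ActiveMQ's default queue prefetch is 1000) rather than strictly alternating single messages, so each consumer sees contiguous ranges whose start points look arbitrary from the outside. A toy sketch of that idea, not the real broker; the batch size of 700 here is purely illustrative, chosen to mirror the numbers above:

```python
from collections import deque

def dispatch(total, consumers, batch_size):
    """Toy model of competing consumers on one queue: the broker hands
    each consumer up to batch_size messages at a time, so each consumer
    receives contiguous ranges rather than alternating messages."""
    queue = deque(range(total))
    assigned = {c: [] for c in consumers}
    while queue:
        for c in consumers:
            for _ in range(min(batch_size, len(queue))):
                assigned[c].append(queue.popleft())
    return assigned

ranges = dispatch(2000, ["consumer1", "consumer2"], batch_size=700)
# consumer1 gets 0-699, consumer2 gets 700-1399, then consumer1
# "jumps" ahead to 1400-1999 -- offsets that look random externally.
for name, msgs in ranges.items():
    print(name, msgs[0], "...", msgs[-1])
```

With a smaller prefetch the batches shrink and the distribution between consumers becomes more even, which is the usual tuning knob when slow consumers hoard prefetched messages.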