
I'm trying to configure HA on ActiveMQ Artemis 2.13, starting with a simple primary and backup. I've read the documentation on clusters and HA a few times now, but I'm still not sure what I'm doing. I've also studied the replicated-failback Java example.

From the client, will I have to specify connection information for both the primary and backup nodes? The example has me confused because it looks like the URLs/connection information is passed into the Java program via input parameters, and I'm not sure where they come from.

In the console for the primary, everything looks normal, but I now also have "broadcast-groups" and "cluster-connections" entries. The secondary only has these two.

On the primary, the attribute "Failover on server shutdown" shows false...

Here are the HA configurations I made:

Primary (192.168.56.105) broker.xml:

<configuration xmlns="urn:activemq"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:xi="http://www.w3.org/2001/XInclude"
               xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">

    <core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="urn:activemq:core ">
        <name>0.0.0.0</name>
        <persistence-enabled>true</persistence-enabled>
        <journal-type>NIO</journal-type>
        <paging-directory>data/paging</paging-directory>
        <bindings-directory>data/bindings</bindings-directory>
        <journal-directory>data/journal</journal-directory>
        <large-messages-directory>data/large-messages</large-messages-directory>
        <journal-datasync>true</journal-datasync>
        <journal-min-files>2</journal-min-files>
        <journal-pool-files>10</journal-pool-files>
        <journal-device-block-size>4096</journal-device-block-size>
        <journal-file-size>10M</journal-file-size>
        <journal-buffer-timeout>2884000</journal-buffer-timeout>
        <journal-max-io>1</journal-max-io>
        <disk-scan-period>5000</disk-scan-period>
        <max-disk-usage>90</max-disk-usage>
        <critical-analyzer>true</critical-analyzer>
        <critical-analyzer-timeout>120000</critical-analyzer-timeout>
        <critical-analyzer-check-period>60000</critical-analyzer-check-period>
        <critical-analyzer-policy>HALT</critical-analyzer-policy>      
        <page-sync-timeout>2884000</page-sync-timeout>
        
        <acceptors>
            <!-- Acceptor for every supported protocol -->
            <acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true</acceptor>
        
    <!-- AMQP Acceptor.  Listens on default AMQP port for AMQP traffic.-->
         <acceptor name="amqp">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpMinLargeMessageSize=102400;amqpDuplicateDetection=true</acceptor>

         <!-- STOMP Acceptor. -->
         <acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor>

         <!-- HornetQ Compatibility Acceptor.  Enables HornetQ Core and STOMP for legacy HornetQ clients. -->
         <acceptor name="hornetq">tcp://0.0.0.0:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor>

         <!-- MQTT Acceptor -->
         <acceptor name="mqtt">tcp://0.0.0.0:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor>

        </acceptors>

        <connectors>
            <connector name="artemis">tcp://192.168.56.105:61616</connector>
        </connectors>

        <broadcast-groups>
            <broadcast-group name="broadcast-group-1">
                <group-address>231.7.7.7</group-address> 
                <group-port>9876</group-port>
                <connector-ref>artemis</connector-ref>
            </broadcast-group>
        </broadcast-groups>

        <discovery-groups>
            <discovery-group name="discovery-group-1">
                <group-address>231.7.7.7</group-address>
                <group-port>9876</group-port>
            </discovery-group>
        </discovery-groups>

        <cluster-user>cluster.user</cluster-user>
        <cluster-password>password</cluster-password>

        <ha-policy>
            <replication>
                <master>
                    <check-for-live-server>true</check-for-live-server>
                </master>
            </replication>
        </ha-policy>

        <cluster-connections>
            <cluster-connection name="cluster-1">
                <connector-ref>artemis</connector-ref>
                <discovery-group-ref discovery-group-name="discovery-group-1"/>
            </cluster-connection> 
        </cluster-connections>

        <security-settings>
            <security-setting match="#">
                <permission type="createNonDurableQueue" roles="amq"/>
                <permission type="deleteNonDurableQueue" roles="amq"/>
                <permission type="createDurableQueue" roles="amq"/>
                <permission type="deleteDurableQueue" roles="amq"/>
                <permission type="createAddress" roles="amq"/>
                <permission type="deleteAddress" roles="amq"/>
                <permission type="consume" roles="amq"/>
                <permission type="browse" roles="amq"/>
                <permission type="send" roles="amq"/>
                <!-- we need this otherwise ./artemis data imp wouldn't work -->
                <permission type="manage" roles="amq"/>
            </security-setting>
        </security-settings>

        <address-settings>
            <!-- if you define auto-create on certain queues, management has to be auto-create -->
            <address-setting match="activemq.management#">
                <dead-letter-address>DLQ</dead-letter-address>
                <expiry-address>ExpiryQueue</expiry-address>
                <redelivery-delay>0</redelivery-delay>
                <!-- with -1 only the global-max-size is in use for limiting -->
                <max-size-bytes>-1</max-size-bytes>
                <message-counter-history-day-limit>10</message-counter-history-day-limit>
                <address-full-policy>PAGE</address-full-policy>
                <auto-create-queues>true</auto-create-queues>
                <auto-create-addresses>true</auto-create-addresses>
                <auto-create-jms-queues>true</auto-create-jms-queues>
                <auto-create-jms-topics>true</auto-create-jms-topics>
            </address-setting>
            <!--default for catch all-->
            <address-setting match="#">
                <dead-letter-address>DLQ</dead-letter-address>
                <expiry-address>ExpiryQueue</expiry-address>
                <redelivery-delay>0</redelivery-delay>
                <!-- with -1 only the global-max-size is in use for limiting -->
                <max-size-bytes>-1</max-size-bytes>
                <message-counter-history-day-limit>10</message-counter-history-day-limit>
                <address-full-policy>PAGE</address-full-policy>
                <auto-create-queues>true</auto-create-queues>
                <auto-create-addresses>true</auto-create-addresses>
                <auto-create-jms-queues>true</auto-create-jms-queues>
                <auto-create-jms-topics>true</auto-create-jms-topics>
            </address-setting>
        </address-settings>
        <addresses>
            <address name="DLQ">
                <anycast>
                    <queue name="DLQ" />
                </anycast>
            </address>
            <address name="ExpiryQueue">
                <anycast>
                    <queue name="ExpiryQueue" />
                </anycast>
            </address>

        </addresses>
    </core>
</configuration>

Backup (192.168.56.106) broker.xml:

<configuration xmlns="urn:activemq"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:xi="http://www.w3.org/2001/XInclude"
               xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">

   <core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="urn:activemq:core ">

      <name>0.0.0.0</name>
      <persistence-enabled>true</persistence-enabled>
      <journal-type>NIO</journal-type>
      <paging-directory>data/paging</paging-directory>
      <bindings-directory>data/bindings</bindings-directory>
      <journal-directory>data/journal</journal-directory>
      <large-messages-directory>data/large-messages</large-messages-directory>
      <journal-datasync>true</journal-datasync>
      <journal-min-files>2</journal-min-files>
      <journal-pool-files>10</journal-pool-files>
      <journal-device-block-size>4096</journal-device-block-size>
      <journal-file-size>10M</journal-file-size>
      <journal-buffer-timeout>2868000</journal-buffer-timeout>
      <journal-max-io>1</journal-max-io>
      <disk-scan-period>5000</disk-scan-period>
      <max-disk-usage>90</max-disk-usage>
      <critical-analyzer>true</critical-analyzer>
      <critical-analyzer-timeout>120000</critical-analyzer-timeout>
      <critical-analyzer-check-period>60000</critical-analyzer-check-period>
      <critical-analyzer-policy>HALT</critical-analyzer-policy>      
      <page-sync-timeout>2868000</page-sync-timeout>

      <acceptors>
         <!-- Acceptor for every supported protocol -->
         <acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true</acceptor>
        <!-- AMQP Acceptor.  Listens on default AMQP port for AMQP traffic.-->
         <acceptor name="amqp">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpMinLargeMessageSize=102400;amqpDuplicateDetection=true</acceptor>

         <!-- STOMP Acceptor. -->
         <acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor>

         <!-- HornetQ Compatibility Acceptor.  Enables HornetQ Core and STOMP for legacy HornetQ clients. -->
         <acceptor name="hornetq">tcp://0.0.0.0:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor>

         <!-- MQTT Acceptor -->
         <acceptor name="mqtt">tcp://0.0.0.0:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor>

      </acceptors>

      <connectors>
          <connector name="artemis">tcp://192.168.56.106:61616</connector>
      </connectors>

      <security-settings>
         <security-setting match="#">
            <permission type="createNonDurableQueue" roles="amq"/>
            <permission type="deleteNonDurableQueue" roles="amq"/>
            <permission type="createDurableQueue" roles="amq"/>
            <permission type="deleteDurableQueue" roles="amq"/>
            <permission type="createAddress" roles="amq"/>
            <permission type="deleteAddress" roles="amq"/>
            <permission type="consume" roles="amq"/>
            <permission type="browse" roles="amq"/>
            <permission type="send" roles="amq"/>
            <!-- we need this otherwise ./artemis data imp wouldn't work -->
            <permission type="manage" roles="amq"/>
         </security-setting>
      </security-settings>

      <address-settings>
         <!-- if you define auto-create on certain queues, management has to be auto-create -->
         <address-setting match="activemq.management#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!-- with -1 only the global-max-size is in use for limiting -->
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
         </address-setting>
         <!--default for catch all-->
         <address-setting match="#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!-- with -1 only the global-max-size is in use for limiting -->
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
         </address-setting>
      </address-settings>

      <addresses>
         <address name="DLQ">
            <anycast>
               <queue name="DLQ" />
            </anycast>
         </address>
         <address name="ExpiryQueue">
            <anycast>
               <queue name="ExpiryQueue" />
            </anycast>
         </address>
      </addresses>

        <broadcast-groups>
        <broadcast-group name="broadcast-group-1">
            <group-address>231.7.7.7</group-address>
            <group-port>9876</group-port>
            <connector-ref>artemis</connector-ref>
        </broadcast-group>
    </broadcast-groups>

    <discovery-groups>
        <discovery-group name="discovery-group-1">
            <group-address>231.7.7.7</group-address>
            <group-port>9876</group-port>
        </discovery-group>
    </discovery-groups>

    <cluster-user>cluster.user</cluster-user>
    <cluster-password>password</cluster-password>

    <ha-policy>
        <replication>
            <slave>
                <allow-failback>true</allow-failback>
            </slave>
        </replication>
    </ha-policy>

    <cluster-connections>
        <cluster-connection name="cluster-1">
            <connector-ref>artemis</connector-ref>
            <discovery-group-ref discovery-group-name="discovery-group-1"/>
        </cluster-connection>
    </cluster-connections>
   </core>
</configuration>

I only have the default addresses in the broker.xml files - the DLQ and ExpiryQueue addresses and queues.

Also, here is what the web console displays for each broker. A lot is missing from the backup server.

Primary:

[screenshot of the primary's console tree]

Backup:

[screenshot of the backup's console tree]


1 Answer


The main thing you need to configure a cluster is a cluster-connection. The cluster-connection references a connector, which tells other nodes how to reach this broker; in other words, the connector referenced by the cluster-connection's connector-ref should point at an acceptor configured on the broker. One common mistake here is using 0.0.0.0 in a connector. That is a meta-address meant exclusively for use in an acceptor, and it is meaningless to a remote client. Your connectors should use the real IP address or hostname of the server where the broker is running.

Here's what your configuration should probably be:

Primary (192.168.56.105) broker.xml:

<acceptors>    
    <acceptor name="netty-acceptor">tcp://0.0.0.0:61616</acceptor>
</acceptors>

<connectors>
    <connector name="netty-connector">tcp://192.168.56.105:61616</connector>
</connectors>

<broadcast-groups>
    <broadcast-group name="broadcast-group-1">
        <group-address>231.7.7.7</group-address>
        <group-port>9876</group-port>
        <connector-ref>netty-connector</connector-ref>
    </broadcast-group>
</broadcast-groups>

<discovery-groups>
    <discovery-group name="discovery-group-1">
        <group-address>231.7.7.7</group-address>
        <group-port>9876</group-port>
    </discovery-group>
</discovery-groups>

<cluster-user>cluster.user</cluster-user>
<cluster-password>password</cluster-password>

<ha-policy>
    <replication>
        <master>
            <check-for-live-server>true</check-for-live-server>
        </master>
    </replication>
</ha-policy>

<cluster-connections>
    <cluster-connection name="cluster-1">
        <connector-ref>netty-connector</connector-ref>
        <discovery-group-ref discovery-group-name="discovery-group-1"/>
    </cluster-connection>
</cluster-connections>

Backup (192.168.56.106) broker.xml:

<acceptors>
    <acceptor name="netty-acceptor">tcp://0.0.0.0:61616</acceptor>
</acceptors>

<connectors>
    <connector name="netty-connector">tcp://192.168.56.106:61616</connector>
</connectors>

<broadcast-groups>
    <broadcast-group name="broadcast-group-1">
        <group-address>231.7.7.7</group-address>
        <group-port>9876</group-port>
        <connector-ref>netty-connector</connector-ref>
    </broadcast-group>
</broadcast-groups>

<discovery-groups>
    <discovery-group name="discovery-group-1">
        <group-address>231.7.7.7</group-address>
        <group-port>9876</group-port>
    </discovery-group>
</discovery-groups>

<cluster-user>cluster.user</cluster-user>
<cluster-password>password</cluster-password>

<ha-policy>
    <replication>
        <slave>
            <allow-failback>true</allow-failback>
        </slave>
    </replication>
</ha-policy>

<cluster-connections>
    <cluster-connection name="cluster-1">
        <connector-ref>netty-connector</connector-ref>
        <discovery-group-ref discovery-group-name="discovery-group-1"/>
    </cluster-connection>
</cluster-connections>

Clients using any supported protocol can use the acceptor listening on 61616. There's no strict need to add any additional acceptor elements.
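This also answers your client question: you don't have to manage two separate connections. With the Core JMS client that ships with Artemis, you can list both nodes in a single URL so that the initial connection succeeds even if one broker is down, and ha=true tells the client to learn the cluster topology and fail over automatically. Here's a minimal sketch (the class name is just a placeholder; the URL uses your two IPs):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;

import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class HaClientExample {
    public static void main(String[] args) throws Exception {
        // List both nodes so the initial connection succeeds even if one is down.
        // ha=true makes the client track the cluster topology and fail over;
        // reconnectAttempts=-1 keeps retrying while failover is in progress.
        ConnectionFactory factory = new ActiveMQConnectionFactory(
                "(tcp://192.168.56.105:61616,tcp://192.168.56.106:61616)?ha=true&reconnectAttempts=-1");
        try (Connection connection = factory.createConnection()) {
            connection.start();
            // ... create sessions, producers, and consumers as usual
        }
    }
}

The replicated-failback example takes its URLs as program arguments only because the example runner supplies them; in your own application you would typically hard-code or externalize a multi-node URL like the one above.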

It's normal for the backup to display limited information on the web console: the backup is passive while the master is active. Once the master goes down, the backup will activate and its web console will become fully functional. When the master comes back up it will resume its live role, since you've configured check-for-live-server on the master and allow-failback on the slave.