I'm new to ActiveMQ Artemis and would like the community to check whether my HA broker cluster configuration is correct, or whether I should configure it another way, since I haven't found a detailed tutorial for my case. All of the brokers run on the same machine.
The scenario:
There is a master node on port 61617 and two slave nodes (slave1, slave2) on ports 61618 and 61619. If the master node dies, one of the slaves becomes active (replication mode).
The consumer must be able to treat the cluster as a "black box". By that I mean that a change of master (i.e. when the master dies) should have no effect on the consumer, or on the way it connects to the cluster.
What I managed to do (as I understand it, only the cluster, acceptor, and connector properties need to be configured for this case, so I attach only that part of each broker's configuration):
master broker:
<connectors>
   <connector name="artemis">tcp://localhost:61617</connector>
</connectors>

<ha-policy>
   <replication>
      <master/>
   </replication>
</ha-policy>

<acceptors>
   <acceptor name="artemis">tcp://localhost:61617</acceptor>
</acceptors>

<cluster-user>cluster</cluster-user>
<cluster-password>cluster</cluster-password>

<broadcast-groups>
   <broadcast-group name="bg-group1">
      <group-address>231.7.7.7</group-address>
      <group-port>9876</group-port>
      <broadcast-period>5000</broadcast-period>
      <connector-ref>artemis</connector-ref>
   </broadcast-group>
</broadcast-groups>

<discovery-groups>
   <discovery-group name="dg-group1">
      <group-address>231.7.7.7</group-address>
      <group-port>9876</group-port>
      <refresh-timeout>10000</refresh-timeout>
   </discovery-group>
</discovery-groups>

<cluster-connections>
   <cluster-connection name="my-cluster">
      <connector-ref>artemis</connector-ref>
      <message-load-balancing>ON_DEMAND</message-load-balancing>
      <max-hops>0</max-hops>
      <discovery-group-ref discovery-group-name="dg-group1"/>
   </cluster-connection>
</cluster-connections>
slave 1 broker (the cluster configuration is the same as the master's; it was auto-generated when creating the node with the --clustered CLI option):
<ha-policy>
   <replication>
      <slave/>
   </replication>
</ha-policy>

<connectors>
   <connector name="artemis">tcp://localhost:61618</connector>
   <connector name="netty-live-connector">tcp://localhost:61617</connector>
</connectors>

<acceptors>
   <acceptor name="artemis">tcp://localhost:61618</acceptor>
</acceptors>
slave 2 broker (the cluster configuration is the same as the master's; it was auto-generated when creating the node with the --clustered CLI option):
<ha-policy>
   <replication>
      <slave/>
   </replication>
</ha-policy>

<connectors>
   <connector name="artemis">tcp://localhost:61619</connector>
   <connector name="netty-live-connector">tcp://localhost:61617</connector>
</connectors>

<acceptors>
   <acceptor name="artemis">tcp://localhost:61619</acceptor>
</acceptors>
JNDI configuration in the consumer:
java.naming.factory.initial=org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory
connectionFactory.ConnectionFactory=(tcp://localhost:61617?ha=true&retryInterval=1000&retryIntervalMultiplier=1.0&reconnectAttempts=10,tcp://localhost:61618?ha=true&retryInterval=1000&retryIntervalMultiplier=1.0&reconnectAttempts=10,tcp://localhost:61619?ha=true&retryInterval=1000&retryIntervalMultiplier=1.0&reconnectAttempts=10)
My configuration works; however, I'm not sure it is the right way to do it.
I've also found a similar question that uses static connectors. What do they do? I don't understand how they work. Or maybe static connectors are the configuration I'm looking for?
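From what I can tell, a static-connector setup would look roughly like the sketch below on the master broker (the connector names slave1-connector and slave2-connector are my own invention): instead of discovering cluster members over UDP multicast via broadcast/discovery groups, each broker explicitly lists the connectors of the other members. I'm not sure this is exactly right, so please correct me:

```xml
<!-- Hypothetical master broker fragment using static discovery.
     Connector names for the slaves are illustrative, not from my
     actual configuration. -->
<connectors>
   <connector name="artemis">tcp://localhost:61617</connector>
   <connector name="slave1-connector">tcp://localhost:61618</connector>
   <connector name="slave2-connector">tcp://localhost:61619</connector>
</connectors>

<cluster-connections>
   <cluster-connection name="my-cluster">
      <connector-ref>artemis</connector-ref>
      <message-load-balancing>ON_DEMAND</message-load-balancing>
      <max-hops>0</max-hops>
      <!-- Replaces <discovery-group-ref>: the cluster members are
           enumerated statically instead of found via multicast. -->
      <static-connectors>
         <connector-ref>slave1-connector</connector-ref>
         <connector-ref>slave2-connector</connector-ref>
      </static-connectors>
   </cluster-connection>
</cluster-connections>
```

If I understand correctly, each slave would carry a mirror-image fragment pointing back at the master (and the other slave), which avoids UDP multicast entirely; is that preferable to the broadcast/discovery-group approach on a single machine?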