
I am having some problems with WSO2 ESB Inbound Endpoints. From what I have seen in the documentation, this property

<parameter name="coordination">true</parameter>

forces the Inbound Endpoint to run in just one of the worker nodes of a cluster. If the selected node goes down, another worker node will start the Inbound Endpoint.

I have a cluster with two worker nodes and one manager node. The cluster is configured following the AWS Mode instructions and it works fine. I also have a JMS Inbound Endpoint, configured this way:

<inboundEndpoint name="INB_Q1" onError="ARQ.ERROR" protocol="jms" sequence="INB_Q1_FunINB" suspend="false" xmlns="http://ws.apache.org/ns/synapse">
<parameters>
    <parameter name="interval">100</parameter>
    <parameter name="sequential">true</parameter>
    <parameter name="coordination">true</parameter>
    <parameter name="transport.jms.Destination">INB_Q1</parameter>
    <parameter name="transport.jms.CacheLevel">3</parameter>
    <parameter name="transport.jms.ConnectionFactoryJNDIName">QueueConnectionFactory</parameter>
    <parameter name="java.naming.factory.initial">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
    <parameter name="java.naming.provider.url">@activemq_failover</parameter>
    <parameter name="transport.jms.SessionAcknowledgement">AUTO_ACKNOWLEDGE</parameter>
    <parameter name="transport.jms.SessionTransacted">false</parameter>
    <parameter name="transport.jms.ConnectionFactoryType">queue</parameter>
    <parameter name="transport.jms.ContentType">application/json</parameter>
    <parameter name="transport.jms.SharedSubscription">false</parameter>
</parameters>
</inboundEndpoint>

When I start the cluster, everything works fine. The scheduled task for the polling (JMS) Inbound Endpoint starts in just one node, and I see just one consumer on my ActiveMQ queue.

Then I shut down the node executing the task; the cluster gets notified and the scheduled task starts on the remaining active node. It keeps working fine.

Now I restart the node I shut down previously, and here is the problem: this node starts the scheduled task again, so both workers execute the same Inbound Endpoint and I have two consumers for the same queue.

Any idea why this is happening? Maybe I missed some Task Manager configuration? Could this be a bug?

Thanks.

1 Answer


In your cluster setup, when configuring the database you may find instructions on "Mounting the registry on manager and worker nodes". Under that, it instructs you to add the following for the worker nodes:

<remoteInstance url="https://localhost:9443/registry">
    <id>instanceid</id>
    <dbConfig>sharedregistry</dbConfig>
    <readOnly>true</readOnly>
    <enableCache>true</enableCache>
    <registryRoot>/</registryRoot>
    <cacheId>regadmin@jdbc:mysql://carbondb.mysql-wso2.com:3306/REGISTRY_DB?autoReconnect=true</cacheId>
</remoteInstance>

There should be a slight change in that. Because the same configuration was added to both worker nodes, both of them try to write the registry configuration to the database. Due to that, the task configuration gets renewed, and a new polling process is started when the stopped node is restarted.

So please change the following in one node:

<remoteInstance url="https://localhost:9443/registry">
    <id>instanceid</id>
    <dbConfig>sharedregistry</dbConfig>
    <readOnly>false</readOnly>
    <enableCache>true</enableCache>
    <registryRoot>/</registryRoot>
    <cacheId>regadmin@jdbc:mysql://carbondb.mysql-wso2.com:3306/REGISTRY_DB?autoReconnect=true</cacheId>
</remoteInstance>

The change we have made is the following:

<readOnly>false</readOnly>

When you have more than one worker node, only one of them should have the readOnly property set to false; the others should keep it set to true.
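As a sketch, assuming the default mount configuration from the clustering guide, the `remoteInstance` sections in the two workers' registry.xml files would then differ only in the readOnly flag (hostnames and cacheId taken from the snippet above):

```xml
<!-- Worker node 1 (registry.xml): the single writable node -->
<remoteInstance url="https://localhost:9443/registry">
    <id>instanceid</id>
    <dbConfig>sharedregistry</dbConfig>
    <readOnly>false</readOnly>
    <enableCache>true</enableCache>
    <registryRoot>/</registryRoot>
    <cacheId>regadmin@jdbc:mysql://carbondb.mysql-wso2.com:3306/REGISTRY_DB?autoReconnect=true</cacheId>
</remoteInstance>

<!-- Worker node 2 (registry.xml): read-only, so it does not rewrite the
     registry entries that would renew the inbound endpoint's task -->
<remoteInstance url="https://localhost:9443/registry">
    <id>instanceid</id>
    <dbConfig>sharedregistry</dbConfig>
    <readOnly>true</readOnly>
    <enableCache>true</enableCache>
    <registryRoot>/</registryRoot>
    <cacheId>regadmin@jdbc:mysql://carbondb.mysql-wso2.com:3306/REGISTRY_DB?autoReconnect=true</cacheId>
</remoteInstance>
```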

This is not a bug, and this configuration change will address your issue.