
I'm looking for advice on a JMS-based architecture...

My application needs to receive JMS messages on behalf of thousands of different destinations, and then deliver to those destinations via non-JMS protocols (i.e. this is a gateway). Allowable solutions are for all messages to originally be sent to one JMS queue, or for messages to go to one queue per destination.

Solutions need to perform well with this large number of destinations (and many messages per second).

The requirements are:

  1. While a message is being delivered to one destination, no other message may be processed for that destination.
  2. Messages must be delivered FIFO per destination, based on when they were sent into JMS.
  3. No messages may be lost (JMS transaction semantics are adequate).
  4. Deliveries must take place in parallel to multiple destinations (but with no parallelism per destination).
  5. Several identical instances of the application, on different machines, implement this, all running at once. They can communicate via a shared cache or JMS, but communication should be simple and minimal.
  6. The gateway will reside in a J2EE container, but is not required to use MDBs.

Thanks in advance


2 Answers


It sounds like you could use one queue per destination to deliver messages from the different publishers to the gateway. The gateway would then need to be multi-threaded, with one thread per queue consumer. So, for x producers publishing to n destinations, the gateway will need n threads, one per destination. This architecture gives you throughput governed by how much processing the gateway has to do with a message before forwarding it to its final destination, and by how long the final destination takes to process a message before the gateway can send the next one.
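The per-destination constraint (serial delivery per destination, parallelism across destinations) can be sketched with one single-threaded executor per destination. This is a minimal sketch, not the gateway itself: the class and method names are hypothetical, and in a real gateway each submitted task would be the JMS-consume-and-forward step.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch: serialize deliveries per destination while allowing
// parallelism across destinations, using one single-threaded
// executor per destination (hypothetical names throughout).
public class PerDestinationDispatcher {
    private final Map<String, ExecutorService> executors = new ConcurrentHashMap<>();

    // Tasks submitted for the same destination run one at a time,
    // in submission (FIFO) order; different destinations run in parallel.
    public void dispatch(String destination, Runnable delivery) {
        executors
            .computeIfAbsent(destination, d -> Executors.newSingleThreadExecutor())
            .submit(delivery);
    }

    // Stop accepting work and wait for in-flight deliveries to finish.
    public void shutdown() throws InterruptedException {
        for (ExecutorService e : executors.values()) {
            e.shutdown();
            e.awaitTermination(10, TimeUnit.SECONDS);
        }
    }
}
```

A single-threaded executor is what gives you both the no-concurrency rule (requirement #1) and FIFO order (requirement #2) per destination, without any explicit locking.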

This design has 2 downsides:

  1. Your application(s) will have a single point of failure: the gateway. You will not be able to load-balance it, because the order of consumption is important to you, so you don't want two gateways draining the same queue.
  2. Each queue can potentially become a bottleneck, clogging messages that are not being processed quickly enough.

If you have control over the publishers, wouldn't you prefer to transport the messages directly from the publishers to the final destinations, using each destination's protocol of choice, without going through the gateway (which seems to serve no purpose other than being a performance bottleneck and a single point of failure)? If you can achieve this, your next task is to teach the final destinations to multi-process requests, relaxing the order constraint if possible (requirement #2).

Another choice is batch processing: at any given point in time, a consumer drains all available messages on the queue and processes them as a single batch. This means you'd have to do synchronous message consumption (MessageConsumer#receive()), as opposed to asynchronous consumption with an onMessage listener.


@Mesocyclone: based on your question and the solution Moe provided above, here is a possible solution.

You can introduce one internal queue per destination in your gateway application (e.g. dest1queue, dest2queue, and so on) and expose only one input queue to receive messages. You can then have a single MDB thread listening on each of these internal queues, deployed on different servers: for example, dest1queue is listened to by a single-threaded MDB on server1, dest2queue by a single-threaded MDB on server2, dest3queue by a single-threaded MDB on server3, and so on.

So basically the flow would be:

Single input queue exposed outside the gateway application -> message is received by one or more instances of an MDB whose only purpose is to route the incoming message to the appropriate internal queue -> each internal queue (one per destination) is listened to by only one MDB thread (since you don't require parallelism for one destination), which processes the message and talks to the destination.
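The routing step in that flow can be sketched like this. The queue-naming convention follows the answer (dest1 -> dest1queue), but the way the destination ID is obtained and the sender callback are hypothetical; a real router MDB would read the ID from a JMS message property and forward to the internal javax.jms.Queue within the same transaction, so no message is lost between receive and forward.

```java
import java.util.function.BiConsumer;

// Sketch of the router MDB's per-message logic (hypothetical names).
// The sender callback stands in for sending to an internal JMS queue.
public class Router {
    private final BiConsumer<String, String> sender; // (internalQueueName, payload)

    public Router(BiConsumer<String, String> sender) {
        this.sender = sender;
    }

    // Naming convention from the answer: dest1 -> dest1queue, etc.
    static String internalQueueFor(String destinationId) {
        return destinationId + "queue";
    }

    // Called once per incoming message on the single exposed input queue.
    public void onMessage(String destinationId, String payload) {
        sender.accept(internalQueueFor(destinationId), payload);
    }
}
```

Because the router does no per-destination work itself, it is safe to run many router instances in parallel; ordering only needs to be preserved from the internal queue onward, where a single MDB thread consumes.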

Benefits of the above design:

  1. Each internal queue can be listened to by an MDB thread deployed on a different server, so that each MDB thread gets maximum processing time.
  2. At any point in time, you can change the number of threads listening for one destination without affecting the others.
  3. However, this design requires a backup MDB server for each internal queue to avoid a SPOF. The server on which you deploy the application may provide some sort of failover capability.