I have an Activiti flow with a User Task section that looks like this:

[Flow diagram screenshot: "Flow Screen"]

The basic idea is that the "Wait for Notification" user task waits for a response from a special queue. In practice, a specific piece of code simply completes the task:

// Build the process variables to hand back to the waiting task.
Map<String, Object> variables = new HashMap<String, Object>();
variables.put("KEY", variable);

// Complete the "Wait for Notification" task on behalf of its assignee.
String assignee = variable.getAssignee();
processService.completeTask(assignee, variables);

So my problem is that the service responsible for this code can receive several responses from that queue. That is actually the behaviour I want (the queue can return something like Message_Processed and, a few milliseconds later, Message_Send). But when the second response arrives, the Activiti engine throws an exception (see the cause below) and the whole flow dies.

Caused by: org.activiti.engine.ActivitiOptimisticLockingException: Task[id=3867, name=Wait for Notification] was updated by another transaction concurrently

So what I'm looking for: is there a way to make the task accept and swallow all of these responses without throwing an exception?


1 Answer


I see the following points:

1) You might want to check out the Activiti Receive Task, which is actually a better fit for waiting until a signal to proceed arrives: http://www.activiti.org/userguide/index.html#bpmnReceiveTask
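For illustration, a receive task is signalled through the RuntimeService rather than the TaskService. A minimal sketch, assuming the receive task has the activity id "waitForNotification" (that id is not from the original post), could look like this:

import java.util.Map;

import org.activiti.engine.RuntimeService;
import org.activiti.engine.runtime.Execution;

public void triggerReceiveTask(RuntimeService runtimeService,
                               String processInstanceId,
                               Map<String, Object> variables) {
    // Find the execution that is currently waiting in the receive task.
    // "waitForNotification" is an assumed activity id, not taken from the original flow.
    Execution execution = runtimeService.createExecutionQuery()
            .processInstanceId(processInstanceId)
            .activityId("waitForNotification")
            .singleResult();

    if (execution != null) {
        // Hand over the message payload and let the process move on.
        runtimeService.signal(execution.getId(), variables);
    }
}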

2) You probably get the OptimisticLockingException because your updates arrive "too fast" and concurrently. Here I see the following options:

  • that you configure your queue consumer so that it receives messages from the queue in a single-threaded, exclusive manner. You should then also make sure that your subsequent "Update MessageVO" service task, as well as your receive task, is not marked async=true. This ensures that signalling the receive task to proceed and creating the next receive task (ready to be updated by the next message) happen within the same database transaction.

  • that you let the queue consumer "crash" with the optimistic locking exception and make sure that your transacted session with the queue is rolled back, or that you explicitly ask for redelivery, so that the message is redelivered according to some redelivery policy. A rough sketch combining both ideas follows this list.
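For illustration only, here is one way the single consumer plus rollback-for-redelivery combination could look with Spring's DefaultMessageListenerContainer (the queue name, the listener, and the use of Spring JMS are all assumptions, not taken from the question):

import javax.jms.ConnectionFactory;
import javax.jms.MessageListener;

import org.springframework.jms.listener.DefaultMessageListenerContainer;

public DefaultMessageListenerContainer notificationConsumer(ConnectionFactory connectionFactory,
                                                            MessageListener notificationListener) {
    DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
    container.setConnectionFactory(connectionFactory);
    container.setDestinationName("notification.queue"); // assumed queue name
    container.setConcurrentConsumers(1);                // single-threaded, exclusive consumption
    container.setSessionTransacted(true);               // an exception rolls the message back for redelivery
    container.setMessageListener(notificationListener); // listener that signals/completes the Activiti task
    return container;
}

With the session transacted, letting the ActivitiOptimisticLockingException propagate out of the listener rolls the message back onto the queue, and the broker redelivers it according to its redelivery policy.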

3) Another reason for your OptimisticLockingException might be that you cache the old/updated task instead of querying for it immediately before signalling it to proceed. Always do that query so that you don't "reuse" any outdated task or execution object.
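To illustrate that with the original user task, here is a sketch (the taskDefinitionKey and the use of the plain TaskService are assumptions) that re-queries the task right before completing it and simply returns when the task is already gone:

import java.util.Map;

import org.activiti.engine.TaskService;
import org.activiti.engine.task.Task;

public void completeWaitTask(TaskService taskService, String processInstanceId,
                             Map<String, Object> variables) {
    // Query the task immediately before completing it instead of reusing
    // a Task object that was fetched earlier and may already be outdated.
    Task task = taskService.createTaskQuery()
            .processInstanceId(processInstanceId)
            .taskDefinitionKey("waitForNotification") // assumed task definition key
            .singleResult();

    // If the task is gone, an earlier message has already completed it;
    // simply return instead of letting the engine throw.
    if (task != null) {
        taskService.complete(task.getId(), variables);
    }
}

Returning when the query comes back empty is also one way to get the "swallow the extra responses" behaviour asked for in the question.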