I want to create a trigger in DB2 10.1.2 (LUW running on Linux) so that, when a column is updated to a certain value, the rows in that table holding that value are counted, the count is compared against a count stored in another table, and that other table is updated when they match (e.g. rolling up task status into a job status). Expressing it seemed pretty easy:
CREATE TRIGGER AUTHORED
AFTER UPDATE OF TASK_STATUS ON TASK_TABLE
REFERENCING NEW AS N
FOR EACH ROW
WHEN (N.TASK_STATUS = 'Completed')
UPDATE JOB_TABLE SET JOB_STATUS = 'Completed'
WHERE JOB_TABLE.ID = N.JOB_ID
  AND JOB_TABLE.TOTAL_TASKS = (SELECT COUNT(*) FROM TASK_TABLE
                               WHERE TASK_TABLE.JOB_ID = N.JOB_ID
                                 AND TASK_TABLE.TASK_STATUS = 'Completed')
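For reference, the two tables look roughly like this. The column names are the ones used in the trigger; the data types, keys, and the TASK_TABLE ID column are only illustrative assumptions:

CREATE TABLE JOB_TABLE (
    ID          INTEGER NOT NULL PRIMARY KEY,
    TOTAL_TASKS INTEGER NOT NULL,
    JOB_STATUS  VARCHAR(20)
);

CREATE TABLE TASK_TABLE (
    ID          INTEGER NOT NULL PRIMARY KEY,   -- assumed key column, not named in the trigger
    JOB_ID      INTEGER NOT NULL REFERENCES JOB_TABLE (ID),
    TASK_STATUS VARCHAR(20)
);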
Unfortunately, it seems that the context of the trigger and the body of the trigger are not in the same unit of work, and a deadlock occurs when the count reads the very row locked by the triggering update. Here's the output of "db2pd -wlocks" after I do a triggered update:
Locks being waited on :
AppHandl [nod-index] TranHdl Lockname Type Mode Conv Sts CoorEDU AppName AuthID AppID
44248 [000-44248] 111 0200040E070000000000000052 RowLock ..X G 1385 perl KJPIRES 10.0.15.139.38727.140201011731
14937 [000-14937] 15 0200040E070000000000000052 RowLock .NS W 1238 perl KJPIRES 10.0.15.139.55287.140211231609
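For context, the statement that fires the trigger is just an ordinary single-row task update along these lines (the WHERE predicate here is only illustrative):

UPDATE TASK_TABLE
SET TASK_STATUS = 'Completed'
WHERE ID = 12345;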
I tried to use "with UR" for the internal count, but that is explicitly ignored when I create the trigger ("SQL20159W The isolation clause is ignored because of the statement context. SQLSTATE=01652").
I also tried using a BEFORE trigger and an INSTEAD OF trigger, but ran into problems with both.
This seems like it would be something common to do. How is it normally handled?
Comments:

Can you show the UPDATE statement that causes that trigger to run? How do you determine that there is a deadlock? – mustaccio

Could JOB_STATUS be derived on-the-fly on a regular read (say, by defining a view)? At worst, you could define a scheduled call to check on all jobs. Perhaps a stored procedure that commits the task status first? Would specifying SKIP LOCKED DATA work here? – Clockwork-Muse
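Following up on Clockwork-Muse's view suggestion, here is a minimal sketch of deriving the job status on read instead of storing it. The column names come from the question; the view name, the 'In Progress' label, and the exact CASE logic are assumptions about the intended semantics:

CREATE VIEW JOB_STATUS_V AS
SELECT J.ID,
       CASE WHEN J.TOTAL_TASKS = COALESCE(T.COMPLETED_TASKS, 0)
            THEN 'Completed'
            ELSE 'In Progress'
       END AS JOB_STATUS
FROM JOB_TABLE J
LEFT JOIN (SELECT JOB_ID, COUNT(*) AS COMPLETED_TASKS
           FROM TASK_TABLE
           WHERE TASK_STATUS = 'Completed'
           GROUP BY JOB_ID) T
       ON T.JOB_ID = J.ID;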