I have a PL/SQL script that loops over records of people (~4 million) and, for each, executes multiple updates (~100) and a single delete statement (the updates and the delete are all on different tables). The problem I am facing is that the one delete statement takes about half the run time by itself. I understand that a delete has to maintain the indexes, but I still find this rather excessive. I am currently testing the script with one thread via dbms_parallel_execute, but I plan to multithread it.
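For reference, my dbms_parallel_execute setup looks roughly like the sketch below (the task name, driving table, and chunk procedure are placeholders, not my actual code):

```sql
BEGIN
  -- Create a named task and split the driving table into rowid chunks.
  DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'my_task');
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
    task_name   => 'my_task',
    table_owner => USER,
    table_name  => 'DRIVING_TABLE',   -- placeholder
    by_row      => TRUE,
    chunk_size  => 10000);
  -- Each chunk invocation gets :start_id/:end_id rowid binds;
  -- parallel_level => 1 is what I use while testing single-threaded.
  DBMS_PARALLEL_EXECUTE.RUN_TASK(
    task_name      => 'my_task',
    sql_stmt       => 'BEGIN my_chunk_proc(:start_id, :end_id); END;',
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => 1);
END;
/
```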
I am executing a query similar to the following:
DELETE FROM table1 t1
WHERE (t1.key1, t1.key2) IN (SELECT t2.key1, t2.key2
                               FROM table2 t2
                              WHERE t2.parm1 = 1234
                                AND t2.parm2 = 5678);
Some relevant facts:
- Table2 (~30 million records) is ~10 times larger than table1 (~3 million records).
- There is a primary key on table1(key1, key2).
- There is a primary key on table2(key1, key2).
- There is an index on table2(parm1, parm2).
- I have disabled the foreign key constraint on table1(key1, key2) that references table2(key1, key2).
- There are no other constraints on table1, but many more constraints on table2.
- All triggers on table1 have been disabled.
- The explain plan for this query comes up with a cost lower than that of many of my update statements (though I know cost alone doesn't mean much).
Explain plan output (trimmed to the meaningful columns):

ID PARENT OPERATION                   OPTIONS  COST CARDINALITY BYTES
-- ------ --------------------------- -------- ---- ----------- -----
 0      - DELETE STATEMENT            ALL_ROWS    5           1    36
 1      0 DELETE
 2      1 NESTED LOOPS                            5           1    36
 3      2 TABLE ACCESS BY INDEX ROWID             4           1    25
 4      3 INDEX RANGE SCAN                        3           1
 5      2 INDEX UNIQUE SCAN                       1           1    11
I was wondering if there is any way to make this delete go faster. I tried a bulk delete, but it didn't seem to improve the run time. If there were a way to execute all the deletes first and maintain the index afterwards, I suspect it would run faster. Doing a create-table-as-select is out of the picture, since I am looping over records (and applying multiple conditions) from another table to decide what to delete.
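For context, the bulk delete I tried was along these lines (the collection names are illustrative; in my real script they are filled per chunk from the driving cursor):

```sql
DECLARE
  TYPE t_key1_tab IS TABLE OF table1.key1%TYPE;
  TYPE t_key2_tab IS TABLE OF table1.key2%TYPE;
  l_key1 t_key1_tab;
  l_key2 t_key2_tab;
BEGIN
  -- Collect the key pairs to delete in one round trip.
  SELECT t2.key1, t2.key2
    BULK COLLECT INTO l_key1, l_key2
    FROM table2 t2
   WHERE t2.parm1 = 1234
     AND t2.parm2 = 5678;

  -- Issue the deletes as a single bulk-bound statement.
  FORALL i IN 1 .. l_key1.COUNT
    DELETE FROM table1 t1
     WHERE t1.key1 = l_key1(i)
       AND t1.key2 = l_key2(i);
END;
/
```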
Comments:

- Ed Gibbs: To check whether the table is index-organized, run

      SELECT IOT_TYPE FROM ALL_TABLES WHERE TABLE_NAME = 'MYTABLENAME';

  If the result isn't NULL, it's index-organized.
- Unoembre: Step 5 is an INDEX UNIQUE SCAN, where I was guessing the (param1, param2) index was non-unique. Step 3 is a RANGE SCAN, but these should be using the (t2.key1, t2.key2) pairs to access the primary key of table1, which is a unique scan by definition. The additional info will help clarify this confusion. It looks like it's not unnesting the sub-query.