I agree with Tom Kyte that a full table scan is not evil in itself, but only as long as the table is relatively small; in that case an additional index on the table is redundant. However, a table with 100,000 records should not be considered small, yet the explain plan for such a table shows a full table scan. So I ran a small experiment on my laptop with Oracle installed locally:
1) First, created my_table:
CREATE TABLE my_table(
    "ID" NUMBER NOT NULL ENABLE,
    "INVOICE_NO" VARCHAR2(10),
    CONSTRAINT "test_PK" PRIMARY KEY ("ID")
);
2) Then created an index on the invoice_no column (because I will filter on it):
CREATE INDEX "my_table_index1" ON my_table (invoice_no)
3) Then, inserted 100K records:
DECLARE
    mod_val NUMBER;
BEGIN
    FOR i IN 1..100000 LOOP
        mod_val := MOD(i, 6);
        IF (mod_val = 0) THEN
            INSERT INTO my_table (ID, INVOICE_NO) VALUES (i, '5570-110');
        ELSIF (mod_val = 1) THEN
            INSERT INTO my_table (ID, INVOICE_NO) VALUES (i, '5570-111');
        ELSIF (mod_val = 2) THEN
            INSERT INTO my_table (ID, INVOICE_NO) VALUES (i, '5570-112');
        ELSIF (mod_val = 3) THEN
            INSERT INTO my_table (ID, INVOICE_NO) VALUES (i, '5570-113');
        ELSIF (mod_val = 4) THEN
            INSERT INTO my_table (ID, INVOICE_NO) VALUES (i, '5570-114');
        ELSIF (mod_val = 5) THEN
            INSERT INTO my_table (ID, INVOICE_NO) VALUES (i, '5570-115');
        END IF;
    END LOOP;
    COMMIT;
END;
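(Side note, not part of the original run: the same test data could presumably also be loaded with a single set-based INSERT instead of the row-by-row loop; a minimal sketch, assuming the same table and the same six invoice values:)
-- Hypothetical set-based alternative to the loop above:
-- a CONNECT BY row generator produces 100K rows, and the invoice
-- number '5570-110' .. '5570-115' is derived from MOD(level, 6).
INSERT INTO my_table (id, invoice_no)
SELECT level, '5570-11' || TO_CHAR(MOD(level, 6))
FROM dual
CONNECT BY level <= 100000;
COMMIT;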
4) Then updated a single record (just to give the later select a distinctive value to look for):
BEGIN
    UPDATE my_table SET invoice_no = 'exception' WHERE id = 50000;
    COMMIT;
END;
5) Then ran the select through EXPLAIN PLAN:
EXPLAIN PLAN FOR
SELECT * FROM my_table WHERE invoice_no = 'exception';
6) Then fetched the plan output:
SELECT * FROM TABLE(dbms_xplan.display);
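(Aside: as far as I know, EXPLAIN PLAN only shows the optimizer's estimated plan. To see the plan that was actually executed, something along these lines should work, assuming the query itself is run first with the GATHER_PLAN_STATISTICS hint:)
SELECT /*+ GATHER_PLAN_STATISTICS */ * FROM my_table WHERE invoice_no = 'exception';
SELECT * FROM TABLE(dbms_xplan.display_cursor(NULL, NULL, 'ALLSTATS LAST'));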
7) and got the results:
PLAN_TABLE_OUTPUT
Plan hash value: 3804444429

------------------------------------------------------------------------------
| Id  | Operation         | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |          | 83256 |  1626K|   103   (1)| 00:00:02 |
|   1 |  TABLE ACCESS FULL| MY_TABLE | 83256 |  1626K|   103   (1)| 00:00:02 |
------------------------------------------------------------------------------

Note
-----
   - dynamic sampling used for this statement (level=2)
Conclusion: It feels strange, almost like "magic": why did Oracle decide not to use the index on the invoice_no field and instead scan an estimated 83256 rows? I agree that my laptop is not loaded with concurrent users and that the table is not that big (it contains only numbers and varchars), but I don't like this magic and would like to know the reason for this behaviour :)
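(The "dynamic sampling" note in the plan above usually means the table had no statistics gathered at all, so the optimizer sampled it on the fly. A minimal diagnostic sketch to check that:)
SELECT num_rows, blocks, last_analyzed
FROM user_tables
WHERE table_name = 'MY_TABLE';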
UPDATE: I just added some dummy value (see below) to invoice_no field to all records - just to increase the size of the table, however, table full scan remains: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
UPDATE2: I also executed ANALYZE TABLE, but the result is the same:
ANALYZE TABLE my_table COMPUTE STATISTICS;
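(Another diagnostic sketch, to see what the optimizer now knows about the selectivity of INVOICE_NO - the number of distinct values and whether a histogram exists:)
SELECT column_name, num_distinct, density, histogram, last_analyzed
FROM user_tab_col_statistics
WHERE table_name = 'MY_TABLE';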
UPDATE3: I tried to force the use of the index with a hint, but the result is the same (maybe wrong syntax?):
EXPLAIN PLAN FOR
SELECT /*+ INDEX(my_table my_table_index1) */ * FROM my_table t WHERE invoice_no = 'exception'
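(One possible reason the hint is ignored - this is an assumption about the statement above, not something I have verified: because the table is given the alias t, the INDEX hint has to reference that alias rather than the table name, e.g.:)
EXPLAIN PLAN FOR
SELECT /*+ INDEX(t my_table_index1) */ * FROM my_table t WHERE invoice_no = 'exception';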
UPDATE4: Finally, I was able to "tell Oracle" to use the index, by executing the DBMS_STATS gather-table-stats procedure:
BEGIN
    DBMS_STATS.GATHER_TABLE_STATS(ownname => user,
                                  tabname => 'my_table');
END;
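(Presumably - this is an assumption, not part of the original test - adding cascade => TRUE would refresh the index statistics in the same call, although recent Oracle versions tend to do that by default:)
BEGIN
    DBMS_STATS.GATHER_TABLE_STATS(ownname => user,
                                  tabname => 'my_table',
                                  cascade => TRUE); -- also gather statistics for the table's indexes
END;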
Here's the explain plan output after gathering the statistics:
-----------------------------------------------------------------------------------------------
| Id  | Operation                   | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                 |     1 |   294 |     5   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| MY_TABLE        |     1 |   294 |     5   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | my_table_index1 |     1 |       |     4   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("INVOICE_NO"='exception')
So it appears Oracle decides on a query approach at some point in time and does not revise it even when the situation has changed. I can accept that, but it's strange that it didn't pick the right approach for this test case, where I simply created the table, inserted the data and ran the select. Do we always have to execute DBMS_STATS.GATHER_TABLE_STATS, at least once at the start, to tell Oracle to use the best query approach?
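(Related sketch, assuming 11g or later: whether the database currently considers the table's statistics missing or stale can be checked in USER_TAB_STATISTICS:)
SELECT table_name, num_rows, stale_stats, last_analyzed
FROM user_tab_statistics
WHERE table_name = 'MY_TABLE';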
Don't use ANALYZE TABLE to gather statistics. It's deprecated and should not be used any longer. Use dbms_stats.gather_table_stats() instead. - a_horse_with_no_name