0
votes

I'm doing an insert as:

insert /*+ NOLOGGING */ into myTable
  select /*+ parallel(3) */ * from myTmpTable;

Here is the explain plan:

------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |                            |  110M |  5776M|    91428(2)| 00:03:30 |        |      |            |
|   1 |  LOAD TABLE CONVENTIONAL | myTable                    |       |       |            |          |        |      |            |
|   2 |   PX COORDINATOR         |                            |       |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM)   | :TQ10000                   |  110M |  5776M|    91428(2)| 00:03:30 |  Q1,00 | P->S | QC (RAND)  |
|   4 |     PX BLOCK ITERATOR    |                            |  110M |  5776M|    91428(2)| 00:03:30 |  Q1,00 | PCWC |            |
|   5 |      TABLE ACCESS FULL   | myTmpTable                 |  110M |  5776M|    91428(2)| 00:03:30 |  Q1,00 | PCWP |            |
------------------------------------------------------------------------------------------------------------------------------------

myTmpTable has a total of 110M rows, and the primary key of myTable was already set in the temporary table.

When I run the query, I've seen that during the full table access, once it reaches about 60M rows, performance decreases significantly: it loads only 500-1k rows at a time, whereas at the beginning it loaded about 5M at a time.

Optimizer parameters:


NAME                                 TYPE    VALUE    
------------------------------------ ------- -------- 
optimizer_capture_sql_plan_baselines boolean FALSE    
optimizer_dynamic_sampling           integer 2        
optimizer_features_enable            string  11.2.0.4 
optimizer_index_caching              integer 0        
optimizer_index_cost_adj             integer 100      
optimizer_mode                       string  ALL_ROWS 
optimizer_secure_view_merging        boolean TRUE     
optimizer_use_invisible_indexes      boolean FALSE    
optimizer_use_pending_statistics     boolean FALSE    
optimizer_use_sql_plan_baselines     boolean TRUE 

EDIT

Here are the statistics after executing a portion of roughly 2M rows:

                        Total     For Execution   For Row
Executions                  1                 1     <0.01
Time Elapsed(sec)   22.862,82         22.862,82      0,01
Time CPU(sec)          673,69            673,69     <0.01
Buffer Requests    26.019.084        26.019.084     12,70
Disk Reads          3.073.558         3.073.558      1,50
Direct                      0                 0      0,00
Rows                2.004.853         2.004.853         1
FETCH                       0                 0         0
Is this the real query? How can you have 97M rows when EXPLAIN PLAN only says about 80000? Did you run DBMS_STATS? EXPLAIN PLAN says the estimated query execution time is 1 second: what is the real execution time? Please post the output listed in community.oracle.com/thread/865295. - pifor
Edited with updated data; as for the real execution time, the statement has been running for 3h and has loaded 72M rows. - PhilCsr
If you select the data from a table, you should never do a parallel query with a serial insert. Either perform a normal insert-select in noparallel mode, or enable PARALLEL DML. Additionally, your mileage may vary, but you will very probably find a trigger on the target table - use the APPEND hint to skip it and compare the response time. - Marmite Bomber
Unfortunately I can't use the APPEND hint because I can't lock the table during the insert operation; same thing for parallel DML. - PhilCsr
110 M rows and you can't get a downtime? Whoever required this will have to be happy with a very slow insert statement. You could think of "partition exchange" if your data fits... - wolφi

2 Answers

0
votes

I think the NOLOGGING is wrong: it is not a valid hint but a table attribute, so it should go into an ALTER TABLE statement:

ALTER TABLE mytable NOLOGGING;

ALTER SESSION ENABLE PARALLEL DML;

Instead of NOLOGGING, the hint should be APPEND:

insert /*+ APPEND */ into myTable
  select * from myTmpTable;
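Putting the answer's pieces together, here is a hedged sketch of the full sequence (table names are taken from the question; this assumes you are allowed to switch the table to NOLOGGING and that no other session needs to modify myTable while the direct-path load holds its lock):

```sql
-- Sketch only: NOLOGGING reduces redo generation for direct-path
-- loads (and is ignored by conventional inserts).
ALTER TABLE myTable NOLOGGING;

-- Parallel DML is disabled by default and must be enabled per session.
ALTER SESSION ENABLE PARALLEL DML;

-- APPEND requests a direct-path load; with parallel DML enabled,
-- both the insert and the scan can run in parallel.
INSERT /*+ APPEND PARALLEL(myTable 3) */ INTO myTable
  SELECT /*+ PARALLEL(myTmpTable 3) */ * FROM myTmpTable;

-- A direct-path insert must be committed before the same session
-- can query the table again.
COMMIT;

-- Restore logging afterwards (unless the database uses FORCE LOGGING,
-- in which case NOLOGGING had no effect anyway).
ALTER TABLE myTable LOGGING;
```

If the direct-path load actually takes effect, the execution plan shows LOAD AS SELECT instead of the LOAD TABLE CONVENTIONAL seen in the question's plan.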
0
votes

Your INSERT statement is very, very slow: I have tested a similar statement on my home PC with Oracle 11.2.0.4 in a Linux VM.

I have a 9 GB table (134M rows) that I can copy without parallelism in 11 minutes:

OPS$ORACLE@FTEX>explain plan for insert into nt select * from t;

Explained.

OPS$ORACLE@FTEX>select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
Plan hash value: 1601196873

---------------------------------------------------------------------------------
| Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |      |   134M|  9472M|   402K  (1)| 01:20:28 |
|   1 |  LOAD TABLE CONVENTIONAL | NT   |       |       |            |          |
|   2 |   TABLE ACCESS FULL      | T    |   134M|  9472M|   402K  (1)| 01:20:28 |
---------------------------------------------------------------------------------

9 rows selected.

OPS$ORACLE@FTEX>set timing on
OPS$ORACLE@FTEX>insert into nt select * from t;

134217728 rows created.

Elapsed: 00:11:16.40             
OPS$ORACLE@FTEX>commit;

But without the real execution plan it is impossible to know where the time is spent (CPU? I/O? something else?): we can only guess and "shoot in the dark".
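To capture the actual execution plan with runtime statistics (rather than the estimate EXPLAIN PLAN produces), one option is DBMS_XPLAN.DISPLAY_CURSOR. This is a sketch, assuming you can add the GATHER_PLAN_STATISTICS hint to the statement:

```sql
-- Run the statement with the GATHER_PLAN_STATISTICS hint so that
-- per-operation row counts and timings are collected.
INSERT /*+ GATHER_PLAN_STATISTICS */ INTO myTable
  SELECT * FROM myTmpTable;

-- Then, in the same session, fetch the real plan: the ALLSTATS LAST
-- format adds actual rows (A-Rows) and actual time (A-Time) per step,
-- which shows exactly where the elapsed time goes.
SELECT *
  FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```

If the Tuning Pack is licensed, a long-running statement can also be watched live through V$SQL_MONITOR or a DBMS_SQLTUNE.REPORT_SQL_MONITOR report, which would show on which plan step the insert slows down after 60M rows.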