1
votes

I am getting the following errors (ORA-01000 occurs far more often than the others):

  1. ORA-01000: Maximum number of open cursors exceeded
  2. ORA-00604: Error at recursive SQL level 1
  3. [ne.jdbc.spi.SqlExceptionHelper] - No more data to read from socket

I will try to explain the code scenario that I am running.

  1. A service call to the DAO layer that passes the primary keys of the records to find and delete from the tables.

    @Transactional(value = "txManagerResult")
    public void deleteCalculatedLkmsEntities(final List<Long> chkdLkmsIds) {
        this.daoResult.deleteCalculatedLkmsEntitiesEnhanced(chkdLkmsIds);
    }
    
    
  2. A DAO method that deletes records from 4 tables. The tables have a parent/child relationship in the data, but the relationship is not enforced on the tables themselves (i.e. there is no foreign key constraint).

    private void deleteCalculatedLkmsEntitiesEnhanced(final List<Long> chkdLkmsIds) {
    
         // The order of the deletes below must be maintained.
    
         // Deletes around 100,000 rows for 1,000 parent-table primary keys.
         this.deleteEntitiesByPrimaryKeyList("delete from Child_Table_4 where CHKD_PRODUCT_OFFERING_ID in ( "
                       + " select ID from Child_Table_2 where CHKD_LKMS_ID in ( "
                       + "select ID from Parent_Table_1 where id in (:pid) "
                       + ") )",
                       "pid",
                       chkdLkmsIds);
    
         // Deletes around 100,000 rows for 1,000 parent-table primary keys.
         this.deleteEntitiesByPrimaryKeyList("delete from Child_Table_3 where CHKD_PRODUCT_OFFERING_ID in ( "
                       + " select ID from Child_Table_2 where CHKD_LKMS_ID in ( "
                       + "select ID from Parent_Table_1 where id in (:pid) "
                       + ") )",
                       "pid",
                       chkdLkmsIds);
    
         // Deletes around 300,000 rows for 1,000 parent-table primary keys.
         this.deleteEntitiesByPrimaryKeyList("delete from Child_Table_2 where CHKD_LKMS_ID in ( "
                       + "select ID from Parent_Table_1 where id in (:pid) )",
                       "pid",
                       chkdLkmsIds);
    
         // Deletes around 1,000 rows.
         this.deleteEntitiesByPrimaryKeyList("delete from Parent_Table_1 where ID in (:pid)",
                       "pid",
                       chkdLkmsIds);
    }
    
    private void deleteEntitiesByPrimaryKeyList(final String query, final String parameter,
                final List<Long> idsToDelete) {
         this.entityManager.createNativeQuery(query)
                 .setParameter(parameter, idsToDelete)
                 .executeUpdate();
    }
    
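One thing worth knowing when binding a collection into an `in (:pid)` clause against Oracle: an expression list is limited to 1,000 elements. A hedged sketch of splitting the id list into fixed-size chunks before binding (the `partition` helper and the `IdChunker` class name are my own illustration, not part of the original code):

```java
import java.util.ArrayList;
import java.util.List;

public class IdChunker {

    // Oracle limits an IN (...) expression list to 1,000 elements,
    // so split the ids into chunks no larger than that.
    static final int CHUNK_SIZE = 1000;

    static List<List<Long>> partition(final List<Long> ids, final int size) {
        final List<List<Long>> chunks = new ArrayList<>();
        for (int i = 0; i < ids.size(); i += size) {
            // subList gives a view over [i, min(i + size, ids.size()))
            chunks.add(ids.subList(i, Math.min(i + size, ids.size())));
        }
        return chunks;
    }

    public static void main(String[] args) {
        final List<Long> ids = new ArrayList<>();
        for (long i = 0; i < 2500; i++) {
            ids.add(i);
        }
        // 2500 ids -> chunks of 1000, 1000 and 500
        final List<List<Long>> chunks = partition(ids, CHUNK_SIZE);
        System.out.println(chunks.size());        // 3
        System.out.println(chunks.get(2).size()); // 500
    }
}
```

Each chunk would then be passed to `deleteEntitiesByPrimaryKeyList` in turn, so every statement execution binds at most 1,000 ids.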
    

The service method is called by 4 parallel threads managed by a ForkJoinPool.

The error does not appear as soon as the application starts; it starts appearing after 10+ hours of execution. The methods above keep running (deleting records) across the threads in a cyclic manner until roughly 9 million records have been processed.

Please help.

Can you also provide your calls? – Hearen

Does anything ever commit the work being done? It doesn't look like it from what you've shown, unless autocommit is on. And is entityManager.createNativeQuery() creating a new connection each time that nothing ever closes? – Alex Poole

@AlexPoole The commit is taken care of by the @Transactional annotation on the service-level method. I too suspect something is opened and never closed by entityManager.createNativeQuery(), but I am not able to confirm that, as this is the utility provided by JPA. – Kamal Narayan Yadav

@Hearen The call is made to the service method from a thread (extends RecursiveAction), and those threads are managed by a ForkJoinPool. – Kamal Narayan Yadav

Do the 4 parallel threads reuse the same database connection? I think this could be the problem; AFAIK, JDBC is not safe for multi-threaded use. – Marmite Bomber

1 Answer

0
votes

Looking at all the comments, this could be caused by the large number of parallel calls, assuming that em.createNativeQuery() handles the DB resource lifecycle properly. Oracle restricts the maximum number of cursors a session may hold open. You can increase that limit at the database level to work around this problem.
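As a sketch (the views and parameter name are standard Oracle, but the value below is only an example; check with your DBA before changing it), you can inspect the current limit and usage, and raise it, like this:

```sql
-- Current per-session limit
SELECT value FROM v$parameter WHERE name = 'open_cursors';

-- Highest number of cursors any single session currently holds open
SELECT MAX(s.value)
  FROM v$sesstat s
  JOIN v$statname n ON s.statistic# = n.statistic#
 WHERE n.name = 'opened cursors current';

-- Raise the limit (example value; requires DBA privileges)
ALTER SYSTEM SET open_cursors = 1000 SCOPE = BOTH;
```

If the usage figure keeps climbing toward the limit no matter how high it is set, the root cause is a cursor leak in the application rather than an undersized limit.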