0 votes

We have a Delphi application which can connect to either Oracle or SQL Server. We use Devart components to connect to the databases, and everything is very generic when it comes to database access, i.e. we use the lowest common denominator. Ultimately we use the databases as data stores and do not use any of the more "advanced" features which may be specific to the database.

However we have a serious performance issue with Oracle. It is to do with inserting data. I know that inserting data by running off a load of insert statements is not great for performance, but due to some business logic that needs to be done on the raw data before it gets uploaded to the database, we are somewhat restricted to multiple inserts. To give an idea of the performance difference, a recent test we did inserts 1000 items into our database and takes 5 minutes in SQL Server (acceptable) but 44 minutes in Oracle.

Is there anything we could do to improve performance? The inserting of data needs to be done by the user and NOT an Oracle DBA, so requiring absolutely no Oracle skills is one of the prerequisites for any solution. Basically, the users need to press a button and everything is done.

Edit: Business logic happens before the insert (although there is a little going on during the actual insert), so more realistic numbers would be 2 minutes for SQL Server and 40 or so minutes for Oracle. Bear in mind we are inserting a few large blobs per record, so perhaps that explains the slowish performance, but not why there is such a difference. The 1000 items are inserted as part of a single transaction.
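For reference, a stripped-down sketch of the kind of loop involved (component, table and column names here are placeholders, not our actual code), using the Devart UniDAC components:

    // Prepared, parameterized INSERT executed once per item, all inside a
    // single transaction. Uses Classes, DB (for ftBlob) and the UniDAC units.
    procedure InsertItems(const Items: array of TItemData); // TItemData: placeholder record type
    var
      i: Integer;
    begin
      UniConnection1.StartTransaction;
      try
        UniQuery1.SQL.Text :=
          'INSERT INTO items (id, name, payload) VALUES (:id, :name, :payload)';
        UniQuery1.Prepare;  // prepare once, bind and execute per item
        for i := 0 to High(Items) do
        begin
          UniQuery1.ParamByName('id').AsInteger  := Items[i].Id;
          UniQuery1.ParamByName('name').AsString := Items[i].Name;
          // each record also carries a few large BLOBs
          UniQuery1.ParamByName('payload').LoadFromStream(Items[i].Blob, ftBlob);
          UniQuery1.Execute;
        end;
        UniConnection1.Commit;
      except
        UniConnection1.Rollback;
        raise;
      end;
    end;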

Don't you have some sort of autocommit option enabled? Are you inserting your data in a transaction? Does UniDAC support DML arrays (hm, probably not)? Btw. inserting 1000 rows into SQL Server in 5 minutes is really, really slow performance as well. – TLama
I can't find it in your question, so is the sequence of processing: apply business logic on 1000 items -> insert 1000 records, or apply business logic on 1 item -> insert 1 record -> repeat 1000 times? 5 minutes to insert 1000 records is way too long. – Lieven Keersmaekers
Explained a bit further in the question above. – Steve
Are you inserting using: a) dynamic/parametrised INSERT statements, b) calling a stored procedure, or c) using Insert & Post or InsertRecord from a dataset component? – Disillusioned
The first thing you need to find out is whether the problem is in Oracle or in Delphi->Oracle. So take into account your answer to the question in my previous comment, and benchmark 1000 inserts directly in Oracle, as similar as possible to what you're doing from Delphi. Obviously if that's also ~44 minutes, the problem lies in how the tables/indexes have been defined in Oracle. If direct Oracle performance is significantly better, then I suggest you use a DB profiler to monitor the actual calls being made to the database. – Disillusioned

2 Answers

2 votes

Oracle supports array DML, which can speed up performance. Also, if BLOBs are involved, performance may depend on caching settings and on how the BLOBs are set up in the destination table. Some DB client parameter tuning may also be beneficial to improve network throughput. Anyway, without knowing which version of Oracle you're using, how it is configured, your table(s) definition (and their tablespaces), how large the BLOBs are, and the SQL actually executed (did you trace it?), it's very difficult to diagnose the real problem. Oracle has some powerful diagnostic tools to identify bottlenecks, but they may not be easy to use and require knowing enough about how Oracle works. From the Enterprise Manager Console you can access some of them in a more readable format - did you check it?
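Just to illustrate what array DML looks like, here is a rough sketch using FireDAC, one library that exposes it; the table, columns and 'Items'/'TItemData' names are made up for the example:

    // Rough FireDAC sketch: bind N values per parameter, then send them in
    // one round trip instead of N separate executions.
    procedure InsertItemsBatched(const Items: array of TItemData);
    var
      i: Integer;
    begin
      FDQuery1.SQL.Text := 'INSERT INTO items (id, name) VALUES (:id, :name)';
      FDQuery1.Params.ArraySize := Length(Items);      // rows per execution
      for i := 0 to High(Items) do
      begin
        FDQuery1.Params[0].AsIntegers[i] := Items[i].Id;
        FDQuery1.Params[1].AsStrings[i]  := Items[i].Name;
      end;
      FDQuery1.Execute(FDQuery1.Params.ArraySize);     // one batched execution
    end;

If most of the elapsed time is spent transferring the BLOBs themselves, batching the statements alone will not close the gap, though.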

Update: because I can't comment on the other answer, Oracle supports different types of LOB storage:

  • LOBs stored inside the database (under transaction management)
  • BFILEs: external file-system LOBs, still managed by Oracle (the LOB data is not under transaction control)
  • SecureFiles (11g onwards; a newer in-database LOB implementation with transaction support and additional features)

Oracle is designed to manage large LOBs - it just needs to be configured properly. Parameters that will affect LOB performance:

  • ENABLE/DISABLE STORAGE IN ROW
  • CACHE/NOCACHE/CACHE READS
  • LOGGING/NOLOGGING
  • CHUNK
  • PCTVERSION/RETENTION (especially for updates and deletes)
  • TABLESPACE (usually a dedicated tablespace for LOBs is advisable)

These parameters need to be set taking into account the average LOB size, how the LOBs are accessed, and how often they are modified. There's no "one size fits all".
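Just to show where those options go (a sketch only: 'items', 'payload' and 'lob_ts' are made-up names, and the values must be chosen for your actual data; this is one-off DDL for whoever creates the schema, not something the end user runs):

    // Example LOB storage clause, executed here through the data-access
    // component purely for illustration; normally this DDL lives in a schema script.
    UniQuery1.SQL.Text :=
      'CREATE TABLE items ('          +
      ' id NUMBER PRIMARY KEY,'       +
      ' payload BLOB'                 +
      ') LOB (payload) STORE AS ('    +
      ' TABLESPACE lob_ts'            +   // dedicated LOB tablespace
      ' DISABLE STORAGE IN ROW'       +   // keep large LOBs out of the row
      ' NOCACHE'                      +
      ' CHUNK 32768'                  +
      ' PCTVERSION 10'                +
      ')';
    UniQuery1.Execute;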

There is also the client side: OCI can buffer LOBs on the client, so small read/write operations are cached, minimizing the number of network round trips and LOB versioning - whether that is used is up to the OCI wrapper you're using.

1 vote

Array DML (only available with FireDAC, ODAC, DOA and our SynDBOracle unit, AFAIK) won't change much if your problem is about blob transfer.

A first idea is to compress the data before transmission; a rough sketch follows.
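For instance, something along these lines with Delphi's ZLib unit (a sketch only; the query, parameter and type names are placeholders, and the application has to decompress the blobs again when reading them back):

    // uses Classes, DB, ZLib
    procedure BindCompressedBlob(RawBlob: TStream);
    var
      Zipped: TMemoryStream;
      Comp: TCompressionStream;
    begin
      Zipped := TMemoryStream.Create;
      try
        Comp := TCompressionStream.Create(clMax, Zipped);
        try
          Comp.CopyFrom(RawBlob, 0);   // deflate the whole payload
        finally
          Comp.Free;                   // flushes the remaining data into Zipped
        end;
        Zipped.Position := 0;
        // bind the compressed stream to the insert parameter instead of the raw one
        UniQuery1.ParamByName('payload').LoadFromStream(Zipped, ftBlob);
      finally
        Zipped.Free;
      end;
    end;

This obviously only helps if the blobs compress well; already-compressed formats (images, archives) will not shrink much.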

Try several access libraries. Our open source SynDBOracle unit accesses the oci.dll client directly and may be slightly faster.

But perhaps the problem is on the server side. Oracle does not like transactions carrying huge amounts of data, since they tend to overflow its redo (write-ahead) logs. Try to tune the write-ahead log settings used for the table.

IMHO an RDBMS is not the best option to store huge blobs. Plain files, indexed via an RDBMS for the metadata, are usually better. Or switch to a NoSQL storage engine, like a key/value store or the MongoDB blob API.

Remember that both Oracle and MSSQL ask for money roughly proportional to the data size....