3
votes

I am doing batch inserts using postgres 9.3-1100-jdbc41 JDBC4 driver.

According to the JDBC specification, it's up to the application to disable autocommit and to commit or roll back the transaction.

In my case, I am not using any transaction (i.e., autocommit is true), but the inserts are still rolled back if one of the inserts in the batch fails.

The JDBC specification says: "If one of the commands in a batch update fails to execute properly, this method throws a BatchUpdateException, and a JDBC driver may or may not continue to process the remaining commands in the batch." It does not say that previously executed commands will be rolled back.

Is my understanding wrong? If not, why is the driver behaving this way, and if it is, what is the right behavior according to the specification?
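For reference, the spec-defined way to see what a driver reports after a batch failure is BatchUpdateException.getUpdateCounts(). The helper below is a minimal, hypothetical sketch (the class and method names are my own); it interprets the per-command counts using the sentinel values Statement.EXECUTE_FAILED and Statement.SUCCESS_NO_INFO. In real code the exception would come from Statement.executeBatch() against a live connection; here a synthetic one stands in so the snippet is self-contained.

```java
import java.sql.BatchUpdateException;
import java.sql.Statement;

public class BatchDemo {
    // Interpret the per-command update counts a driver reports via
    // BatchUpdateException.getUpdateCounts(). Note these counts only tell
    // you what the driver *processed*, not whether the work was committed.
    static String summarize(int[] counts) {
        int ok = 0, failed = 0, unknown = 0;
        for (int c : counts) {
            if (c == Statement.EXECUTE_FAILED) failed++;        // -3
            else if (c == Statement.SUCCESS_NO_INFO) unknown++; // -2
            else ok++;                                          // row count
        }
        return ok + " succeeded, " + failed + " failed, " + unknown + " unknown";
    }

    public static void main(String[] args) {
        // Simulate a driver that processed 2 commands and failed on the 3rd.
        BatchUpdateException e = new BatchUpdateException(
                "duplicate key", new int[] { 1, 1, Statement.EXECUTE_FAILED });
        System.out.println(summarize(e.getUpdateCounts()));
        // prints: 2 succeeded, 1 failed, 0 unknown
    }
}
```

With PgJDBC, even the commands reported as succeeded are rolled back along with the rest of the batch, which is exactly the behaviour in question.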

1
If you are using auto-commit, all batched inserts are still done in a single transaction, if I'm not mistaken. Think of it as a different way of writing a single insert that inserts many rows (insert into .. values (..),(..),(..),(..)) - a_horse_with_no_name
Thanks for your reply. But what about the specification? I think batching exists just to avoid a network round trip for each insert, and applying a transaction should be up to the application. On the DB, each insert should be executed as a single unit. As a user I may put unrelated inserts in a single batch to avoid round trips, and those should not be rolled back if some other command fails. - Mukesh Negi
Well, the specification says "may or may not continue to process" - Postgres chooses to not continue. - a_horse_with_no_name
The specification says "may or may not continue to process further commands". I have also found this link, but it is too old: postgresql.org/message-id/… - Mukesh Negi
So what's unclear about "may not continue"? - a_horse_with_no_name

1 Answer

1
votes

As far as I can tell, the spec essentially leaves this up to the driver; it doesn't specify whether or not already-processed work is committed if the batch fails.

PgJDBC executes the batch in a transaction, so if any part of the batch fails then it will all be aborted.

If you feel this behaviour is incorrect, the first thing to do is write a test case demonstrating that other drivers consistently behave differently from PgJDBC, and submit it to the PgJDBC issue tracker. We do not have time to research the behaviour of other drivers, so you need to write the test case and run it on some other popular databases (MS SQL Server, Oracle, DB2, MySQL, etc.), or arrange to have it run by others. If you show that PgJDBC's behaviour differs from how other drivers handle batches, then it will be worth thinking about adding an option to change the behaviour (and working on making it the default eventually).
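A sketch of how such a cross-driver test case could classify results (class and method names are hypothetical, not from any existing harness): run a batch of N inserts in which exactly one command is known to fail (e.g. a duplicate key), catch the BatchUpdateException, then query `count(*)` on the target table. The surviving row count alone distinguishes the three possible driver behaviours, so the classification logic can be tested without a database:

```java
public class BatchClassify {
    // Classify a driver's batch-failure behaviour from one experiment:
    // a batch of `batchSize` commands where exactly one is known to fail,
    // and `rowsAfterFailure` rows survive in the table afterwards.
    static String classify(int batchSize, long rowsAfterFailure) {
        if (rowsAfterFailure == 0) {
            return "rolled back entire batch";      // PgJDBC's behaviour
        }
        if (rowsAfterFailure == batchSize - 1) {
            return "continued past the failure";    // processed every other command
        }
        return "stopped at the failure, kept earlier inserts";
    }

    public static void main(String[] args) {
        // Example: a 5-command batch whose 3rd command fails.
        System.out.println(classify(5, 0)); // rolled back entire batch
        System.out.println(classify(5, 4)); // continued past the failure
        System.out.println(classify(5, 2)); // stopped at the failure, kept earlier inserts
    }
}
```

Running the same experiment against each database and comparing the classifications would give the issue tracker exactly the evidence described above.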