I have a Spring Batch application which works well. It just reads from a text file and writes to an Oracle table. It performs the loading in chunks; currently I have configured a chunk size of 2000. The issue is that when I implement a skip listener for this job, Spring ignores the chunk size I have given and inserts just one record at a time into the database. The skip listener just writes the invalid record to a text file. Is this how Spring Batch works?
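A stripped-down sketch of the step configuration I am describing (a sketch only: names like loadStep and InputRecord are placeholders, the skipped exception and skip limit are assumptions, and it uses Spring Batch 5's StepBuilder API):

    import org.springframework.batch.core.SkipListener;
    import org.springframework.batch.core.Step;
    import org.springframework.batch.core.repository.JobRepository;
    import org.springframework.batch.core.step.builder.StepBuilder;
    import org.springframework.batch.item.database.JdbcBatchItemWriter;
    import org.springframework.batch.item.file.FlatFileItemReader;
    import org.springframework.context.annotation.Bean;
    import org.springframework.dao.DataIntegrityViolationException;
    import org.springframework.transaction.PlatformTransactionManager;

    @Bean
    public Step loadStep(JobRepository jobRepository,
                         PlatformTransactionManager transactionManager,
                         FlatFileItemReader<InputRecord> reader,
                         JdbcBatchItemWriter<InputRecord> writer,
                         SkipListener<InputRecord, InputRecord> fileWritingSkipListener) {
        return new StepBuilder("loadStep", jobRepository)
                .<InputRecord, InputRecord>chunk(2000, transactionManager) // chunk size of 2000
                .reader(reader)
                .writer(writer)
                .faultTolerant()
                .skip(DataIntegrityViolationException.class) // example skippable exception (assumption)
                .skipLimit(100)                              // assumed skip limit
                .listener(fileWritingSkipListener)           // writes rejected records to a text file
                .build();
    }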
"it is inserting just one record at a time into database." - are you sure that it's done in separate transaction everytime? chunks needed for transaction boundaries.
– Igor Konoplyanko
I am just using the Spring-provided JdbcBatchItemWriter. I am not writing any transaction-related code. I just configured a JpaTransactionManager as a Spring bean.
– Vasanthakumar86
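For reference, a writer and transaction manager configured that way might look roughly like this (a sketch only: the table, columns, and InputRecord type are placeholders; jakarta.persistence assumes Spring Batch 5 / Spring Boot 3, older versions use javax.persistence):

    import javax.sql.DataSource;
    import jakarta.persistence.EntityManagerFactory;
    import org.springframework.batch.item.database.JdbcBatchItemWriter;
    import org.springframework.batch.item.database.builder.JdbcBatchItemWriterBuilder;
    import org.springframework.context.annotation.Bean;
    import org.springframework.orm.jpa.JpaTransactionManager;

    @Bean
    public JdbcBatchItemWriter<InputRecord> writer(DataSource dataSource) {
        return new JdbcBatchItemWriterBuilder<InputRecord>()
                .dataSource(dataSource)
                // placeholder table and columns, just to show the batched INSERT
                .sql("INSERT INTO TARGET_TABLE (ID, NAME) VALUES (:id, :name)")
                .beanMapped()
                .build();
    }

    @Bean
    public JpaTransactionManager transactionManager(EntityManagerFactory entityManagerFactory) {
        return new JpaTransactionManager(entityManagerFactory);
    }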
1 Answer
In a chunk, the ItemWriter will always first attempt to write the entire list of items at once. However, if a skippable exception is thrown, the framework needs to figure out which item(s) caused the error.
To do this, the transaction is rolled back and then the items are retried one-by-one. This allows any item(s) that may have caused the issue to be passed to your skip listener. Unfortunately, it also removes the batch-iness of the chunk.
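To make that concrete, a skip listener like the one described in the question might look roughly like this (a sketch only; InputRecord and the reject-file location are placeholders). onSkipInWrite is the callback that fires once per offending item, after the chunk has been rolled back and the items re-driven one at a time:

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;
    import org.springframework.batch.core.SkipListener;

    public class FileWritingSkipListener implements SkipListener<InputRecord, InputRecord> {

        private final Path rejectFile = Path.of("rejected-records.txt"); // assumed location

        @Override
        public void onSkipInWrite(InputRecord item, Throwable t) {
            // called for each item skipped during the one-by-one retry of the chunk
            append(item + " : " + t.getMessage());
        }

        @Override
        public void onSkipInRead(Throwable t) {
            append("read failure: " + t.getMessage());
        }

        @Override
        public void onSkipInProcess(InputRecord item, Throwable t) {
            append(item + " : " + t.getMessage());
        }

        private void append(String line) {
            try {
                Files.writeString(rejectFile, line + System.lineSeparator(),
                        StandardCharsets.UTF_8,
                        StandardOpenOption.CREATE, StandardOpenOption.APPEND);
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }
    }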
In general, it is preferable (and will perform better) to do upfront validation in the processor, so you can "filter" the items out rather than throwing an exception and retrying the items individually.
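A sketch of that filtering approach (InputRecord and its validity check are placeholders): returning null from an ItemProcessor removes the item from the chunk, so the writer still receives the surviving items as one batched insert instead of falling back to item-by-item retries.

    import org.springframework.batch.item.ItemProcessor;

    public class ValidatingProcessor implements ItemProcessor<InputRecord, InputRecord> {

        @Override
        public InputRecord process(InputRecord item) {
            if (!isValid(item)) {
                // optionally write the bad record to your reject file here,
                // then filter it out instead of throwing an exception
                return null;
            }
            return item;
        }

        private boolean isValid(InputRecord item) {
            // placeholder validation rule
            return item != null && item.getName() != null && !item.getName().isBlank();
        }
    }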