From what I've read:
The Table or View data access mode commits each row as its own transaction, which makes a package that transfers 5 million rows take a long time (30+ minutes).
The Table or View - fast load data access mode lets you specify Rows per batch and the Maximum insert commit size when inserting into the destination, so the same 5 million rows take just over 2 minutes.
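To make sure I'm picturing the difference correctly, here is a rough sketch of the two commit patterns as I understand them. This is plain Python with sqlite3, not SSIS or OLE DB, and the table name and row counts are made up purely for illustration:

```python
import sqlite3
import time

# Stand-in destination table; this only illustrates the commit pattern,
# not SSIS/OLE DB internals.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fact_demo (id INTEGER, val TEXT)")
rows = [(i, f"row {i}") for i in range(50_000)]

# "Table or View" style: one transaction (commit) per row.
start = time.perf_counter()
for r in rows:
    conn.execute("INSERT INTO fact_demo VALUES (?, ?)", r)
    conn.commit()                      # commit after every single row
print("per-row commits:", round(time.perf_counter() - start, 2), "s")

conn.execute("DELETE FROM fact_demo")
conn.commit()

# "Fast load" style: insert in batches and commit once per batch.
batch_size = 10_000                    # loosely analogous to the commit size setting
start = time.perf_counter()
for i in range(0, len(rows), batch_size):
    conn.executemany("INSERT INTO fact_demo VALUES (?, ?)", rows[i:i + batch_size])
    conn.commit()                      # one commit per batch
print("batched commits:", round(time.perf_counter() - start, 2), "s")
```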
The question arises because one of the SSIS packages that loads the DW uses the Table or View data access mode in its OLE DB destination. From my understanding, this was done so the package can catch error rows and redirect them (via the error output) into an error records table. As a result, the load takes over 30 minutes, whereas fast load would take about 2 minutes for the same data.
If I understand correctly, with fast load the destination cannot tell which row in a batch caused the error, so the whole batch fails? If so, is there an alternative: redirect the failed batch through the error output and reprocess it so that the good rows in the batch still reach the destination while only the bad rows land in the error log table (see the sketch below)? Is that even a good idea, or is it better to bite the bullet, so to speak, and accept how long the row-by-row load takes?
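For clarity, here is a minimal sketch of the fallback logic I have in mind: attempt each batch with a fast set-based insert, and only when a batch fails retry that batch row by row, diverting the offending rows to the error table. Again this is Python/sqlite3 rather than SSIS, and the table and column names are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fact_demo (id INTEGER PRIMARY KEY, val TEXT)")
conn.execute("CREATE TABLE error_records (id INTEGER, val TEXT, error TEXT)")

rows = [(1, "a"), (2, "b"), (2, "duplicate key"), (3, "c")]  # one bad row
batch_size = 2

for i in range(0, len(rows), batch_size):
    batch = rows[i:i + batch_size]
    try:
        # First attempt: the whole batch at once (the "fast load" path).
        with conn:
            conn.executemany("INSERT INTO fact_demo VALUES (?, ?)", batch)
    except sqlite3.DatabaseError:
        # The batch failed and we don't know which row caused it, so fall
        # back to row-by-row inserts for this batch only (the slow path).
        for row in batch:
            try:
                with conn:
                    conn.execute("INSERT INTO fact_demo VALUES (?, ?)", row)
            except sqlite3.DatabaseError as exc:
                # Divert the bad row, plus the error text, to the error table.
                with conn:
                    conn.execute(
                        "INSERT INTO error_records VALUES (?, ?, ?)",
                        (*row, str(exc)),
                    )

print(conn.execute("SELECT COUNT(*) FROM fact_demo").fetchone()[0], "good rows loaded")
print(conn.execute("SELECT * FROM error_records").fetchall())
```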
Thank you in advance.