
My SSIS package reads a small .txt file and loads into a table. No problems so far.

However, the package sometimes fails with primary-key violations, which is understandable: the source file contains duplicate keys.

I'd like the good data to continue to flow into the table, and to redirect the duplicates to a different table for further investigation.

My thought was to simply route the flat file source's red error output to a new OLE DB Destination/table, setting the source's Error Output to Error = Redirect Row.

However, I still get the error: Violation of PRIMARY KEY constraint.

[screenshot of the data flow]

Use the error output of the destination, not the error output of the flat file source. – Jeroen Mostert

Why not add a Lookup to the table before writing to it? Matched records can then be redirected to your other table; non-matched records (i.e. error redirection) go to the existing table. – MiguelH

@JeroenMostert That seems to send all my data (700-odd rows) to the error table. None seem to reach the valid table. – glass_kites

Yes, because the entire insert (which is batched) fails on the constraint violation. If you need to filter individual rows, either crank the batch size down to single-row inserts (horribly inefficient), or follow Miguel's strategy of doing a Lookup and only inserting rows known not to exist (but note this is not atomic, so if other processes are inserting concurrently this can still produce duplicates), or bulk-copy the whole file into a staging table and perform a T-SQL INSERT ... WHERE NOT EXISTS or MERGE. SSIS isn't always the most convenient tool for the job. – Jeroen Mostert

1 Answer


I would handle this by importing the entire file into an empty staging table, then calling a stored procedure that inserts the new records into the final destination table and routes any duplicate records into your other table.
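A minimal sketch of such a procedure, assuming hypothetical table and column names (`dbo.Staging`, `dbo.Target`, `dbo.Duplicates`, key column `Id`, one payload column `SomeColumn`); adjust to your actual schema. The `ROW_NUMBER` step also catches keys repeated within the file itself, which a plain `NOT EXISTS` check against the target would miss:

```sql
CREATE PROCEDURE dbo.usp_LoadFromStaging
AS
BEGIN
    SET NOCOUNT ON;

    -- Redirect duplicates: rows whose key already exists in the target,
    -- plus second-and-later copies of a key repeated inside the file.
    WITH ranked AS (
        SELECT Id, SomeColumn,
               ROW_NUMBER() OVER (PARTITION BY Id ORDER BY Id) AS rn
        FROM dbo.Staging
    )
    INSERT INTO dbo.Duplicates (Id, SomeColumn)
    SELECT Id, SomeColumn
    FROM ranked
    WHERE rn > 1
       OR EXISTS (SELECT 1 FROM dbo.Target AS t WHERE t.Id = ranked.Id);

    -- Insert one copy of each genuinely new key into the target.
    WITH ranked AS (
        SELECT Id, SomeColumn,
               ROW_NUMBER() OVER (PARTITION BY Id ORDER BY Id) AS rn
        FROM dbo.Staging
    )
    INSERT INTO dbo.Target (Id, SomeColumn)
    SELECT Id, SomeColumn
    FROM ranked
    WHERE rn = 1
      AND NOT EXISTS (SELECT 1 FROM dbo.Target AS t WHERE t.Id = ranked.Id);

    -- Clear the staging table for the next load.
    TRUNCATE TABLE dbo.Staging;
END;
```

In the SSIS package, the data flow then only has to load the file into `dbo.Staging`, followed by an Execute SQL Task that calls the procedure. Note Jeroen's caveat above still applies: if other processes insert into the target concurrently, wrap the two inserts in a transaction (or use `MERGE` with appropriate locking hints) to keep the check-then-insert atomic.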