I have a PostgreSQL table with a unique index on some fields to prevent duplicates. Inserts go through a PL/pgSQL function that inserts all the rows and catches the unique_violation exception; although the function no longer fails, a single duplicate still aborts the insertion of every record.
I can't issue a separate INSERT per row due to performance issues (these run in batches of hundreds). The problem is that one duplicate stops the whole process, as with the first two rows in the following example.
CREATE OR REPLACE FUNCTION easy_import() RETURNS VOID AS
$$
BEGIN
    BEGIN
        INSERT INTO things ("title", "uniq1", "uniq2") VALUES
            ('title 1', 100, 102),
            ('title 2', 100, 102),
            ('title 3', 101, 102),
            ('title 4', 102, 102),
            ('title 5', 103, 102),
            ('title 6', 104, 102),
            ('title 7', 105, 102),
            ('title 8', 106, 102),
            ('title 9', 107, 102),
            ('title 10', 108, 102);
        RETURN;
    EXCEPTION WHEN unique_violation THEN
        -- do nothing
    END;
END;
$$
LANGUAGE plpgsql;
Is there a way to ignore the unique_violation for just the offending row, so it doesn't stop the remaining INSERTs?
Thank you.
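A sketch of one possible approach, assuming PostgreSQL 9.5 or later and a unique constraint or index on (uniq1, uniq2): INSERT ... ON CONFLICT DO NOTHING skips only the conflicting rows instead of aborting the whole statement, and it also tolerates duplicates within the same VALUES list (unlike ON CONFLICT DO UPDATE, which would error on them).

```sql
-- Hypothetical rewrite of easy_import(); assumes PostgreSQL 9.5+
-- and a unique index on (uniq1, uniq2).
CREATE OR REPLACE FUNCTION easy_import() RETURNS VOID AS
$$
BEGIN
    INSERT INTO things ("title", "uniq1", "uniq2") VALUES
        ('title 1', 100, 102),
        ('title 2', 100, 102),  -- duplicates the row above: silently skipped
        ('title 3', 101, 102)
    ON CONFLICT ("uniq1", "uniq2") DO NOTHING;  -- skip duplicates, keep inserting the rest
END;
$$
LANGUAGE plpgsql;
```

The EXCEPTION block becomes unnecessary in this variant, since no unique_violation is raised in the first place.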
Update
- The unique index is on the "uniq1" and "uniq2" fields; sorry about the confusion.
- While @cdhowie's solution seems to be the best, running the same query again still triggers an error. That's odd, because the query does the JOIN precisely to avoid that. Still working on it.
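One possible explanation for the error, sketched below as an assumption about the anti-join approach: if two of the *new* rows share the same (uniq1, uniq2) pair (like 'title 1' and 'title 2' above), both pass the anti-join against the existing table, and the second one still raises unique_violation. Deduplicating the incoming rows first, e.g. with DISTINCT ON, would avoid that on pre-9.5 servers:

```sql
-- Hypothetical pre-9.5 variant: deduplicate the incoming batch with
-- DISTINCT ON, then anti-join against rows already in the table.
INSERT INTO things ("title", "uniq1", "uniq2")
SELECT v.title, v.uniq1, v.uniq2
FROM (
    -- keep only one row per (uniq1, uniq2) within the batch itself
    SELECT DISTINCT ON (uniq1, uniq2) title, uniq1, uniq2
    FROM (VALUES
        ('title 1', 100, 102),
        ('title 2', 100, 102),
        ('title 3', 101, 102)
    ) AS raw(title, uniq1, uniq2)
) AS v
LEFT JOIN things t
       ON t.uniq1 = v.uniq1 AND t.uniq2 = v.uniq2
WHERE t.uniq1 IS NULL;  -- exclude rows whose key already exists
```

Note that this is still racy under concurrency: another transaction can insert the same key between the check and the insert, so keeping the unique_violation handler (or using serializable isolation) remains prudent.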
(uniq1, uniq2)? - cdhowie