2
votes

I've run into a situation in our database where I need to shift the timestamps of a number of records by a day; however, I have a unique constraint that requires an id field and a timestamp field to be unique together.

Here is the table description.

Table "public.eedata"
   Column    |              Type              |                         Modifiers                          
-------------+--------------------------------+------------------------------------------------------------
 eedata_id   | bigint                         | not null default nextval('eedata_eedata_id_seq'::regclass)
 user_id     | integer                        | 
 eeupload_id | bigint                         | 
 eetimestamp | timestamp(0) without time zone | 
Indexes:
    "pk_eedata" PRIMARY KEY, btree (eedata_id)
    "eedata_user_id_key" UNIQUE, btree (user_id, eetimestamp)
    "fki_eeuploadid" btree (eeupload_id)
Foreign-key constraints:
    "fk_eeupload_id" FOREIGN KEY (eeupload_id) REFERENCES eeupload(eeupload_id) ON UPDATE CASCADE ON DELETE CASCADE
    "fk_user_id" FOREIGN KEY (user_id) REFERENCES users(user_id) ON UPDATE CASCADE ON DELETE CASCADE

The problem in this case is caused by the eedata_user_id_key constraint. I can successfully subtract a day using,

update eedata set eetimestamp = eetimestamp - interval '1 day' where eeupload_id = xxx;

because the order in which the update happens to be applied prevents any collisions. However, when I try

update eedata set eetimestamp = eetimestamp + interval '1 day' where eeupload_id = xxx;

I get

ERROR:  duplicate key violates unique constraint "eedata_user_id_key"

What I need to be able to do is either specify the order in which the update is applied (effectively an ORDER BY for an UPDATE statement) or suspend the constraint for a single update statement.

I'm using Postgres 8.1.11 if that matters.
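To be concrete, something like this hypothetical syntax is what I have in mind — as far as I can tell, Postgres doesn't actually support ORDER BY on UPDATE, so this is not valid SQL:

```sql
-- hypothetical, NOT valid PostgreSQL: update the latest rows first
-- so each row moves into a slot before its neighbor collides with it
update eedata
   set eetimestamp = eetimestamp + interval '1 day'
 where eeupload_id = xxx
 order by eetimestamp desc;
```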


2 Answers

3
votes

Could you just remove the constraint for the duration of your update, and then add it back in when you're done?
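A minimal sketch of that approach, using the index name from the `\d` output in the question (the `xxx` placeholder is the asker's upload id):

```sql
BEGIN;

-- drop the unique index that blocks the update
DROP INDEX eedata_user_id_key;

UPDATE eedata
   SET eetimestamp = eetimestamp + interval '1 day'
 WHERE eeupload_id = xxx;

-- recreate it; this fails (rolling everything back) if the update
-- actually left genuine duplicates behind
CREATE UNIQUE INDEX eedata_user_id_key ON eedata (user_id, eetimestamp);

COMMIT;
```

Doing it all in one transaction means no other session can insert duplicates while the index is gone, at the cost of holding a lock on the table for the duration.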

0
votes

I recently had a similar issue - I have test data that ends in September 2008, and I needed it to be more recent. I moved it up six months, but had some duplicates. What I needed to do was detect the duplicates before they were created and do an UPDATE of the existing rows instead of an INSERT of new rows.
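Against the question's schema, the detection step might look like this (the six-month interval is from my scenario; adjust to taste):

```sql
-- rows whose shifted timestamp would collide with an existing row
SELECT e.eedata_id, e.user_id, e.eetimestamp
  FROM eedata e
  JOIN eedata dup
    ON dup.user_id = e.user_id
   AND dup.eetimestamp = e.eetimestamp + interval '6 months';
```

Handle the rows this returns first (update or merge them into their would-be duplicates), then run the bulk shift on the rest.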