0
votes

I have one table that I want to partition by RANGE on created_at (a timestamp), so that old data can be deleted easily (by dropping a partition).

CREATE TABLE `orders` (
  `order_id` NVARCHAR(64) NOT NULL,
  `amount` INTEGER NOT NULL,
  `created_at` TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  `modified_at` TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  KEY `order_id` (`order_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

ALTER TABLE dropship.orders
PARTITION BY RANGE (UNIX_TIMESTAMP(created_at)) (
    PARTITION p0 VALUES LESS THAN ( UNIX_TIMESTAMP('2019-03-01 00:00:00') ),
    PARTITION p1 VALUES LESS THAN ( UNIX_TIMESTAMP('2019-04-01 00:00:00') ),
    PARTITION p2 VALUES LESS THAN ( UNIX_TIMESTAMP('2019-05-01 00:00:00') ),
    PARTITION p3 VALUES LESS THAN ( UNIX_TIMESTAMP('2019-06-01 00:00:00') ),
    PARTITION p4 VALUES LESS THAN ( UNIX_TIMESTAMP('2019-07-01 00:00:00') ),
    PARTITION p5 VALUES LESS THAN (MAXVALUE)
);
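With that layout, retiring a month of data is a metadata operation rather than a row-by-row DELETE. A sketch using the partition names from the DDL above (p0 is the oldest month, p5 is the MAXVALUE catch-all):

```sql
-- Drop the oldest month in one cheap operation:
ALTER TABLE dropship.orders DROP PARTITION p0;

-- Add a new month by splitting the MAXVALUE catch-all partition:
ALTER TABLE dropship.orders REORGANIZE PARTITION p5 INTO (
    PARTITION p5 VALUES LESS THAN ( UNIX_TIMESTAMP('2019-08-01 00:00:00') ),
    PARTITION p6 VALUES LESS THAN (MAXVALUE)
);
```

REORGANIZE PARTITION on the catch-all rewrites only the rows in that partition, so it stays fast as long as new partitions are added before rows accumulate in it.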

This table has only two usage patterns: select by order_id, or update by order_id.

select * from orders where order_id = '123';
update orders set amount = 10 where order_id = '123';

Due to a limitation of MySQL partitioning, I cannot add a unique key on order_id, since the created_at field is used for partitioning:

All columns used in the table's partitioning expression must be part of every unique key that the table may have, including any primary key.

Question:

Is there any way to make order_id unique in this table?

I have thought about partitioning by order_id instead, but that makes deleting old data hard.

Any suggestion is welcome. (For example, maybe you have a better design for this table.)

1
"Any suggestion is welcome. (For example, maybe you have a better design for this table.)" Pretty sure MySQL can handle this situation without partitioning just fine... I assume you will not get millions of orders in a day which you need to delete in one go... So my suggestion is to drop the partition usage/idea here. - Raymond Nijland
Sorry, I forgot to mention the order volume: 80,000 orders per day are expected, and I want to keep 1 year of data, which is about 29 million rows. - Lofrank
"I want to keep 1 year of data, which is about 29 million rows." Copy the data to a table that runs on the ARCHIVE storage engine? I assume you don't need to query that old data a lot, or at all? But orders data is important, and the ARCHIVE engine does not support MVCC; it also does not support indexes to speed up queries. - Raymond Nijland
"I want to keep 1 year of data, which is about 29 million rows." MySQL can handle tables with millions or even billions of records just fine when indexed. When in doubt you can always deploy RANGE/LIST partitioning on YEAR(). - Raymond Nijland
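Raymond's RANGE-on-YEAR() idea would look roughly like the sketch below. One caveat: MySQL only allows YEAR() as a partitioning function on DATE/DATETIME columns, while a TIMESTAMP column (as in the DDL above) is restricted to UNIX_TIMESTAMP(), so created_at would need to be changed to DATETIME first. The partition names here are illustrative:

```sql
-- Assumes created_at has been converted to DATETIME:
ALTER TABLE dropship.orders
PARTITION BY RANGE ( YEAR(created_at) ) (
    PARTITION p2019 VALUES LESS THAN (2020),
    PARTITION p2020 VALUES LESS THAN (2021),
    PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);
```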

1 Answer

2
votes
BEGIN;
SELECT 1 FROM orders WHERE order_id = 234 FOR UPDATE;
-- if a row exists, you have a duplicate; roll back instead of inserting
INSERT INTO orders ... order_id = 234;
COMMIT;

But, as Raymond points out, you may as well drop PARTITIONing and make order_id the PRIMARY KEY. That would make all the stated operations slightly faster.
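A minimal sketch of that unpartitioned design, reusing the columns from the question (the secondary index on created_at and the batched purge are additions here, assumed to cover the "delete old data" requirement):

```sql
CREATE TABLE orders (
  order_id    VARCHAR(64) NOT NULL,
  amount      INTEGER NOT NULL,
  created_at  TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  modified_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (order_id),
  KEY idx_created_at (created_at)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

-- Purge old rows in small batches (repeat until 0 rows affected)
-- to keep each transaction and its undo log small:
DELETE FROM orders
WHERE created_at < NOW() - INTERVAL 1 YEAR
LIMIT 10000;
```

The PRIMARY KEY gives you the uniqueness guarantee the question asks for, and at ~29 million rows an indexed batched DELETE is a workable substitute for DROP PARTITION.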