Never, ever read and write from the same instance. Not even in Redshift. In general, any system that forces you to read and write from the same machine reflects poor design.
Since you are asking about Amazon Redshift, I can very comfortably assume that you have analytical data. (Redshift, with its columnar architecture, is optimised for reads, not writes. So if you happen to be storing transactional data on Redshift, I would recommend reconsidering that decision.)
Before designing any infrastructure around analytical data, we should always consider that:
- It'll be voluminous.
- and it'll need to scale further in the near future.
When you scale, reading and writing from the same machine will be catastrophic. And don't forget the locks: DELETE and TRUNCATE take exclusive-style write locks on the table. If some other process or user has already acquired a conflicting lock, even the write on that table will block or fail, messing up the data.
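To make the lock problem concrete, here is a rough sketch of how that contention plays out across two Redshift sessions (the sales table is hypothetical; stv_locks is a real Redshift system table):

    -- Session 1: a long-running ETL step takes a write lock on the table
    BEGIN;
    DELETE FROM sales WHERE sale_date < '2020-01-01';
    -- ... transaction is left open while other steps run ...

    -- Session 2: TRUNCATE needs an exclusive lock, so it now blocks
    -- (and readers queue up behind it) until Session 1 commits or aborts
    TRUNCATE sales;

    -- From a third session, inspect who is holding locks:
    SELECT table_id, last_update, lock_owner, lock_status
    FROM stv_locks
    ORDER BY last_update;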
The above reasons should be convincing enough as to why you should not use a single warehouse to both read and write data.
Follow the model below. It is neat and clean, keeps reads and writes from interfering with each other, and ensures you don't face issues of consistency, locks, etc.:
    DS 1 ----+
             |     +------------+                +------------+
    DS 2 ----+     |            |   AGGREGATES   |            |   reads
             +---> |    DW 1    | -------------> |    DW 2    | ---------->
     ...     |     |            |                |            |
             |     +------------+                +------------+
    DS n ----+
where DS = Data Source and DW = Data Warehouse.
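The AGGREGATES step on DW 1 can be as simple as rebuilding a summary table on a schedule. A minimal sketch, assuming hypothetical raw_sales and daily_sales tables:

    -- On DW 1: condense the raw, voluminous data into a small aggregate table
    DROP TABLE IF EXISTS daily_sales;
    CREATE TABLE daily_sales AS
    SELECT region,
           sale_date,
           SUM(amount) AS total_amount,
           COUNT(*)    AS num_orders
    FROM raw_sales
    GROUP BY region, sale_date;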
How frequently you migrate data from DW 1 --> DW 2 will depend entirely on how fresh the data your readers need to be.
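For the DW 1 --> DW 2 hop itself, one common Redshift pattern is UNLOAD to S3 from the write warehouse and COPY into the read warehouse, scheduled at whatever refresh interval your readers need. The bucket, IAM role ARN, and table names below are placeholders:

    -- On DW 1: export the aggregate to S3
    UNLOAD ('SELECT * FROM daily_sales')
    TO 's3://my-etl-bucket/aggregates/daily_sales_'
    IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
    FORMAT AS PARQUET;

    -- On DW 2: reload the fresh aggregate for readers
    -- (loading into a staging table and swapping it in with
    --  ALTER TABLE ... RENAME avoids even this brief lock)
    TRUNCATE daily_sales;
    COPY daily_sales
    FROM 's3://my-etl-bucket/aggregates/daily_sales_'
    IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
    FORMAT AS PARQUET;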