We have recently partitioned most of our tables in BigQuery using the following method:
1. Run a Dataflow pipeline that reads the original table and writes its data to a new, partitioned table (see the Beam sketch below).
2. Copy the partitioned table back over the original table using a copy job with the write disposition set to WRITE_TRUNCATE (see the copy-job sketch below).
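For reference, step 1 is roughly the following pipeline in the Beam Python SDK (a minimal sketch; the table names and schema are placeholders, and `additional_bq_parameters` is what tells the load job to create the destination as a day-partitioned table):

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Placeholder table IDs and schema for illustration only.
SOURCE_TABLE = 'my-project:my_dataset.events'
DEST_TABLE = 'my-project:my_dataset.events_partitioned'
SCHEMA = 'user_id:STRING,payload:STRING,ts:TIMESTAMP'

def run():
    # Runner, project, region, temp_location, etc. come from pipeline options.
    options = PipelineOptions()
    with beam.Pipeline(options=options) as p:
        (
            p
            | 'ReadOriginal' >> beam.io.ReadFromBigQuery(table=SOURCE_TABLE)
            | 'WritePartitioned' >> beam.io.WriteToBigQuery(
                DEST_TABLE,
                schema=SCHEMA,
                create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
                write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE,
                # Create the destination as a day-partitioned table.
                additional_bq_parameters={'timePartitioning': {'type': 'DAY'}},
            )
        )

if __name__ == '__main__':
    run()
```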
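Step 2 is just a copy job with WRITE_TRUNCATE set, e.g. with the google-cloud-bigquery Python client (same placeholder table IDs):

```python
from google.cloud import bigquery

client = bigquery.Client()

# Placeholder table IDs.
source = 'my-project.my_dataset.events_partitioned'
dest = 'my-project.my_dataset.events'

job_config = bigquery.CopyJobConfig(
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE
)
# Overwrite the original (non-partitioned) table with the partitioned copy.
client.copy_table(source, dest, job_config=job_config).result()
```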
Once the copy completes, the original table holds the data from the newly created partitioned table; however, the original table itself is still not partitioned. So I tried the copy again, this time deleting the original table first, and it all worked.
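In other words, something like this (a sketch with the same placeholder IDs) is what actually produced a partitioned table:

```python
from google.cloud import bigquery

client = bigquery.Client()
source = 'my-project.my_dataset.events_partitioned'
dest = 'my-project.my_dataset.events'

# Dropping the destination first lets the copy recreate it with the
# source's partitioning spec, instead of truncating into the existing
# non-partitioned table.
client.delete_table(dest, not_found_ok=True)
client.copy_table(source, dest).result()
```

The catch is that the table does not exist between the `delete_table` call and the copy job completing.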
The problem is that copying our partitioned table takes around 20 minutes, so deleting the original table first would mean 20 minutes of downtime for our production application. Is there any way for a WRITE_TRUNCATE copy from a partitioned table to replace a non-partitioned table without causing any downtime? Or will we always need to delete the table first in order to replace it?