I just came across an interesting issue with BigQuery.
Essentially, there is a batch job that recreates a table in BigQuery (to delete the old data) and then immediately starts feeding a new dataset in through the streaming interface.
This worked successfully for quite a while.
Lately it has started to lose data.
A small test case confirmed the situation: if the data feed starts immediately after the table has been (successfully!) recreated, parts of the dataset are lost. That is, out of 4000 records fed in, only 2100–3500 make it through.
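A minimal sketch of the test case, assuming the Python google-cloud-bigquery client; the project, dataset, table name and schema below are placeholders, not the real ones:

```python
# Sketch of the repro: recreate the table, immediately stream rows, then count.
import time
from google.cloud import bigquery

TABLE_ID = "my-project.my_dataset.my_table"   # placeholder table id
SCHEMA = [bigquery.SchemaField("id", "INTEGER"),
          bigquery.SchemaField("payload", "STRING")]
ROW_COUNT = 4000


def recreate_table(client):
    """Delete the table (if it exists) and create it again with the same schema."""
    client.delete_table(TABLE_ID, not_found_ok=True)
    client.create_table(bigquery.Table(TABLE_ID, schema=SCHEMA))


def stream_rows(client, n):
    """Stream n rows through the insertAll (streaming) API; return any row errors."""
    rows = [{"id": i, "payload": "row %d" % i} for i in range(n)]
    errors = []
    for start in range(0, n, 500):                       # insert in batches of 500
        errors += client.insert_rows_json(TABLE_ID, rows[start:start + 500])
    return errors


def count_rows(client):
    """Count how many rows are actually visible in the table."""
    result = client.query(f"SELECT COUNT(*) AS c FROM `{TABLE_ID}`").result()
    return list(result)[0].c


if __name__ == "__main__":
    client = bigquery.Client()
    recreate_table(client)                    # returns success almost immediately
    errors = stream_rows(client, ROW_COUNT)   # start feeding right away
    time.sleep(60)                            # give the streaming buffer a moment
    print("insert errors:", errors)           # empty: no errors reported
    print("rows visible:", count_rows(client), "of", ROW_COUNT)
```

The insertAll calls report no errors, yet the final count comes back short.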
I suspect that table creation may be returning success before the table operations (deletion and creation) have fully propagated throughout the environment, so the first parts of the dataset are being fed to old replicas of the table (speculating here).
To confirm this I put a delay between creating the table and starting the data feed. Indeed, if the delay is less than 120 seconds, parts of the dataset are lost.
If it is more than 120 seconds, it seems to work OK.
There used to be no need for this delay. We are using BigQuery in the US. Am I missing something obvious here?
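The workaround is literally just a pause between the two steps; reusing the hypothetical helpers from the sketch above:

```python
# Same hypothetical helpers as in the sketch above; only the pause is new.
client = bigquery.Client()
recreate_table(client)
time.sleep(150)              # anything above ~120 s avoided the loss in my tests
errors = stream_rows(client, ROW_COUNT)
```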
EDIT: Based on the comment from Sean Chen below and a few other sources, this behaviour is expected due to the way tables are cached and the internal table id is propagated throughout the system. BigQuery is built for append-only operations; rewrites are not something the design can easily accommodate and should be avoided.
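In case it helps anyone else hitting this: one way to stay within the append-only model is to replace the table contents with a load job using WRITE_TRUNCATE instead of deleting and recreating the table and then streaming. A rough sketch, with the same placeholder names as above:

```python
# Possible alternative (sketch): replace the data with a WRITE_TRUNCATE load job
# instead of delete + create + streaming. Names and schema are placeholders.
from google.cloud import bigquery

TABLE_ID = "my-project.my_dataset.my_table"
SCHEMA = [bigquery.SchemaField("id", "INTEGER"),
          bigquery.SchemaField("payload", "STRING")]

client = bigquery.Client()
rows = [{"id": i, "payload": "row %d" % i} for i in range(4000)]

job_config = bigquery.LoadJobConfig(
    schema=SCHEMA,
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,  # replace contents
)
client.load_table_from_json(rows, TABLE_ID, job_config=job_config).result()
```

A load job commits atomically, so there is no delete-then-recreate window for streamed rows to land on a stale copy of the table.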