9
votes

I just came across an interesting issue with BigQuery.

Essentially, there is a batch job that recreates a table in BigQuery - to delete the data - and then immediately starts to feed in a new set through the streaming interface.

It used to work like this for quite a while - successfully.

Lately it started to lose data.

A small test case has confirmed the situation: if the data feed starts immediately after recreating (successfully!) the table, parts of the dataset will be lost. I.e. out of 4000 records being fed in, only 2100-3500 would make it through.

I suspect that table creation might be returning success before the table operations (deletion and creation) have been successfully propagated throughout the environment, thus the first parts of the dataset are being fed to the old replicas of the table (speculating here).

To confirm this I have put a delay between the table creation and the start of the data feed. Indeed, if the delay is less than 120 seconds, parts of the dataset are lost.

If it is more than 120 seconds, it seems to work OK.
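
For reference, a rough sketch of the test case, assuming the old com.google.api.services.bigquery Java client (the authorised Bigquery client is constructed elsewhere; the single-column schema, helper name and row contents are made up for illustration):

    import com.google.api.services.bigquery.Bigquery;
    import com.google.api.services.bigquery.model.Table;
    import com.google.api.services.bigquery.model.TableDataInsertAllRequest;
    import com.google.api.services.bigquery.model.TableFieldSchema;
    import com.google.api.services.bigquery.model.TableReference;
    import com.google.api.services.bigquery.model.TableSchema;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    public class RecreateAndStreamTest {

        // Recreate the table, wait delayMillis, then stream 4000 rows.
        // Rows go missing for delays under ~120 seconds; above that, all arrive.
        static void runTest(Bigquery bigquery, String projectId, String datasetId,
                            String tableId, long delayMillis) throws Exception {

            // Drop and recreate the table (both calls return successfully).
            bigquery.tables().delete(projectId, datasetId, tableId).execute();

            Table table = new Table()
                    .setTableReference(new TableReference()
                            .setProjectId(projectId)
                            .setDatasetId(datasetId)
                            .setTableId(tableId))
                    .setSchema(new TableSchema().setFields(Collections.singletonList(
                            new TableFieldSchema().setName("value").setType("INTEGER"))));
            bigquery.tables().insert(projectId, datasetId, table).execute();

            // The delay under test.
            Thread.sleep(delayMillis);

            // Stream 4000 rows through the streaming interface, 500 per request.
            List<TableDataInsertAllRequest.Rows> batch = new ArrayList<>();
            for (int i = 0; i < 4000; i++) {
                batch.add(new TableDataInsertAllRequest.Rows()
                        .setJson(Collections.<String, Object>singletonMap("value", i)));
                if (batch.size() == 500) {
                    bigquery.tabledata()
                            .insertAll(projectId, datasetId, tableId,
                                    new TableDataInsertAllRequest().setRows(batch))
                            .execute();
                    batch = new ArrayList<>();
                }
            }
            if (!batch.isEmpty()) {
                bigquery.tabledata()
                        .insertAll(projectId, datasetId, tableId,
                                new TableDataInsertAllRequest().setRows(batch))
                        .execute();
            }
        }
    }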

There used to be no requirement for this delay. We are using US BigQuery. Am I missing something obvious here?

EDIT: From the comment provided by Sean Chen below and a few other sources: the behaviour is expected due to the way the tables are cached and the internal table id is propagated throughout the system. BigQuery has been built for append-only types of operations. Re-writes are not something that can easily be accommodated in the design and should be avoided.

1
What exactly are you using to identify "success of table creation"? – Mikhail Berlyant
Good point - I might have made an assumption here - bigquery.tables().delete(_projectId, _datasetId, _tableName).execute(); – Evgeny Minkevich
I actually checked getLastStatusCode - it does not change in 3 minutes. (I was wondering whether it could behave the way BigQuery jobs do, where the client is supposed to poll a few times to check if the job has completed. Nope - does not seem to be that.) – Evgeny Minkevich

1 Answer

10
votes

This is more or less expected due to the way that BigQuery streaming servers cache the table generation id (an internal name for the table).

Can you provide more information about the use case? It seems strange to delete the table and then write to the same table again.

One workaround could be to truncate the table instead of deleting it. You can do this by running SELECT * FROM <table> LIMIT 0 with the table itself as the destination table (you might want to use allow_large_results = true and disable flattening, which will help if you have nested data) and write_disposition=WRITE_TRUNCATE. This will empty out the table but preserve the schema. Any rows streamed afterwards will get applied to the same table.
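
A rough sketch of that workaround, again assuming the old com.google.api.services.bigquery Java client and legacy SQL syntax (the query job writes back over the table itself; callers would normally poll the job until it is DONE):

    import com.google.api.services.bigquery.Bigquery;
    import com.google.api.services.bigquery.model.Job;
    import com.google.api.services.bigquery.model.JobConfiguration;
    import com.google.api.services.bigquery.model.JobConfigurationQuery;
    import com.google.api.services.bigquery.model.TableReference;

    public class TruncateViaQuery {

        // Empty out datasetId.tableId while keeping its schema by writing the
        // result of "SELECT * ... LIMIT 0" back over it with WRITE_TRUNCATE.
        static Job truncateTable(Bigquery bigquery, String projectId,
                                 String datasetId, String tableId) throws Exception {

            JobConfigurationQuery queryConfig = new JobConfigurationQuery()
                    .setQuery("SELECT * FROM [" + datasetId + "." + tableId + "] LIMIT 0")
                    .setDestinationTable(new TableReference()
                            .setProjectId(projectId)
                            .setDatasetId(datasetId)
                            .setTableId(tableId))
                    .setWriteDisposition("WRITE_TRUNCATE")
                    .setAllowLargeResults(true)   // needed if the table is large
                    .setFlattenResults(false);    // preserves nested/repeated fields

            Job job = new Job().setConfiguration(
                    new JobConfiguration().setQuery(queryConfig));

            // Submit the job; poll jobs().get() on the returned job until DONE
            // before resuming the streaming inserts.
            return bigquery.jobs().insert(projectId, job).execute();
        }
    }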