
I wonder if BigQuery's limits have decreased, because a view that previously ran without problems suddenly throws "Resources exceeded during query execution: Table metadata used for the query is too large".

Here is a sample query:

SELECT COUNT(*) FROM [myproject:mydataset.myview]

My view is:

SELECT *
FROM TABLE_QUERY([myproject:daily_mydataset],'REGEXP_MATCH(table_id, "^mytable_")')

I have 61 fields per table and 376 tables (under quota). I even deleted about 120 tables, so something must have changed recently (the query still worked a few days ago).

Maybe quotas have decreased for legacy SQL?

NOTE: I tried replacing * with field names in the view; it changed nothing.

Thanks

The amount of data in your tables has probably changed, not the limit. Previously you didn't hit the limit because of your data volume, but now you are above it. – Pentium10
I removed 120 tables, so I'm back to much less data; I only have about 15M rows. I also tried to query January through September 2018 (inclusive) using the regex, but that failed too. Table sizes are pretty stable (around 40k rows each). – Joseph Yourine
The problem encountered here is related to table metadata, not to the data within the tables. In other words, it is related to descriptions you might have added to table columns, not to the number of rows. Having seen a lot of queries start failing overnight, this makes me think (like you) that there has been a change in processing logic in BigQuery. – Sigbjørn

1 Answer


I suggest you migrate to standard SQL. Your view currently uses legacy SQL:

#legacySQL
SELECT *
FROM TABLE_QUERY([myproject:daily_mydataset],
                 'REGEXP_MATCH(table_id, "^mytable_")')
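
In standard SQL, TABLE_QUERY is replaced by wildcard tables. A minimal sketch of the equivalent view, assuming all the matching tables share the mytable_ prefix in that dataset:

#standardSQL
SELECT *
FROM `myproject.daily_mydataset.mytable_*`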

The full migration syntax can be found at this link.

If this doesn't help, please provide more information after reducing the size of your returned data with WHERE clauses (see the sketch below).
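
For example, with wildcard tables you can limit which tables are scanned via the _TABLE_SUFFIX pseudo column. The suffix format below (YYYYMMDD dates) is an assumption based on your January-to-September test; adjust it to your actual table naming scheme:

#standardSQL
SELECT COUNT(*)
FROM `myproject.daily_mydataset.mytable_*`
-- assumed date-style suffix; replace with your real range
WHERE _TABLE_SUFFIX BETWEEN '20180101' AND '20180930'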