I'm querying BigQuery (from Azure Databricks) with a service account that has the following roles:
BigQuery Data Viewer
BigQuery Job User
BigQuery Metadata Viewer
BigQuery Read Session User
The query is:
SELECT DISTINCT column_name
FROM `project.dataset.INFORMATION_SCHEMA.COLUMNS`
WHERE data_type = "TIMESTAMP" AND is_partitioning_column = "YES"
I'm running it through the spark-bigquery connector:
(spark.read.format("bigquery")
    .option("materializationDataset", dataset)
    .option("parentProject", projectId)
    .option("query", query)
    .load()
    .collect())
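(For reference, the same metadata query can also be issued with the BigQuery Python client, which streams results back directly instead of materializing them into a temporary table. This is a sketch, not something from my actual job: the project/dataset names are placeholders and it assumes the google-cloud-bigquery package plus credentials are available.)

```python
# Placeholder identifiers -- replace with real project/dataset names.
PROJECT = "my-project"
DATASET = "my-dataset"

QUERY = f"""
SELECT DISTINCT column_name
FROM `{PROJECT}.{DATASET}.INFORMATION_SCHEMA.COLUMNS`
WHERE data_type = "TIMESTAMP" AND is_partitioning_column = "YES"
"""

def partitioning_timestamp_columns():
    # Requires the google-cloud-bigquery package and valid credentials;
    # imported inside the function so the module loads without them.
    from google.cloud import bigquery
    client = bigquery.Client(project=PROJECT)
    return sorted(row["column_name"] for row in client.query(QUERY).result())
```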
But the Spark job fails with:
"code" : 403,
"errors" : [ {
"domain" : "global",
"message" : "Access Denied: Table project:dataset._sbc_f67ac00fbd5f453b90....: Permission bigquery.tables.updateData denied on table project:dataset._sbc_f67ac00fbd5f453b90.... (or it may not exist).",
"reason" : "accessDenied"
} ],
After adding the BigQuery Data Editor role, the query works.
Why do I need write permissions just to read this metadata? Is there a lower-privilege role I can grant instead?
The docs say only Data Viewer is required, so I'm not sure what I'm doing wrong.
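If project-wide Data Editor turns out to be unavoidable, one narrower option (my assumption, not something the connector docs spell out) would be a dataset-level WRITER grant on just the materialization dataset, which is the dataset-scoped equivalent of Data Editor. The helper below builds the ACL entry in the shape used by the dataset's "access" list (as dumped by `bq show --format=prettyjson`); the service-account email is a placeholder:

```python
def writer_entry(sa_email: str) -> dict:
    # Dataset-level ACL entry granting WRITER (the dataset-scoped
    # equivalent of BigQuery Data Editor) to one service account.
    # Shape matches the "access" entries in `bq show --format=prettyjson`.
    return {"role": "WRITER", "userByEmail": sa_email}

# Placeholder service-account email.
entry = writer_entry("my-sa@my-project.iam.gserviceaccount.com")
# Append `entry` to the dataset's "access" list in the dumped JSON,
# then apply it with: bq update --source dataset.json project:dataset
```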