18
votes

We're using Google BigQuery via the Python API. How would I create a table (new one or overwrite old one) from query results? I reviewed the query documentation, but I didn't find it useful.

We want to simulate:

"SELEC ... INTO ..." from ANSI SQL.

3 Answers

16
votes

You can do this by specifying a destination table in the query. You would need to use the Jobs.insert API rather than the Jobs.query call, fill out the destination table, and specify writeDisposition=WRITE_APPEND (or WRITE_TRUNCATE if you want to overwrite an existing table).

Here is what the configuration would look like if you were using the raw API. If you're using Python, the Python client should give you accessors to these same fields:

"configuration": {
  "query": {
    "query": "select count(*) from foo.bar",
    "destinationTable": {
      "projectId": "my_project",
      "datasetId": "my_dataset",
      "tableId": "my_table"
    },
    "createDisposition": "CREATE_IF_NEEDED",
    "writeDisposition": "WRITE_APPEND",
  }
}
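One way to call Jobs.insert from Python is the discovery-based google-api-python-client. Here is a minimal sketch under that assumption; the project, dataset, and table names are placeholders, and it assumes default application credentials are already set up:

from googleapiclient.discovery import build

# Build a client for the BigQuery v2 REST API (uses default credentials).
bigquery_service = build('bigquery', 'v2')

job_body = {
    "configuration": {
        "query": {
            "query": "select count(*) from foo.bar",
            "destinationTable": {
                "projectId": "my_project",
                "datasetId": "my_dataset",
                "tableId": "my_table"
            },
            "createDisposition": "CREATE_IF_NEEDED",
            # WRITE_APPEND adds rows to the table; WRITE_TRUNCATE would overwrite it.
            "writeDisposition": "WRITE_APPEND"
        }
    }
}

# Jobs.insert starts an asynchronous query job that writes into the destination table.
response = bigquery_service.jobs().insert(projectId="my_project", body=job_body).execute()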

15
votes

The accepted answer is correct, but it does not provide Python code to perform the task. Here is an example, refactored out of a small custom client class I just wrote. It does not handle exceptions, and the hard-coded query should be customised to do something more interesting than just SELECT * ...

from google.cloud import bigquery


class Client(object):

    def __init__(self, origin_project, origin_dataset, origin_table,
                 destination_dataset, destination_table):
        """
        A Client that performs a hardcoded SELECT and INSERTS the results in a
        user-specified location.

        All init args are strings. Note that the destination project is the
        default project from your Google Cloud configuration.
        """
        self.project = origin_project
        self.dataset = origin_dataset
        self.table = origin_table
        self.dest_dataset = destination_dataset
        self.dest_table_name = destination_table
        self.client = bigquery.Client()

    def run(self):
        query = ("SELECT * FROM `{project}.{dataset}.{table}`;".format(
            project=self.project, dataset=self.dataset, table=self.table))

        job_config = bigquery.QueryJobConfig()

        # Set configuration.query.destinationTable
        destination_dataset = self.client.dataset(self.dest_dataset)
        destination_table = destination_dataset.table(self.dest_table_name)
        job_config.destination = destination_table

        # Set configuration.query.createDisposition
        job_config.create_disposition = 'CREATE_IF_NEEDED'

        # Set configuration.query.writeDisposition
        job_config.write_disposition = 'WRITE_APPEND'

        # Start the query
        job = self.client.query(query, job_config=job_config)

        # Wait for the query to finish
        job.result()
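For completeness, a call site might look like the following; all of the project, dataset, and table names below are placeholders you would replace with your own:

if __name__ == '__main__':
    client = Client(
        origin_project='my_project',
        origin_dataset='my_dataset',
        origin_table='my_table',
        destination_dataset='my_dest_dataset',
        destination_table='my_dest_table',
    )
    client.run()  # runs the SELECT and appends the results into the destination table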

0
votes

Create a table from query results in Google BigQuery. Assuming you are using a Jupyter Notebook with Python 3, I am going to explain the following steps:

  1. How to create a new dataset on BQ (to save the results)
  2. How to run a query and save the results in a new dataset in table format on BQ

Create a new DataSet on BQ: my_dataset

from google.cloud import bigquery

bigquery_client = bigquery.Client()  # Create a BigQuery client object
dataset_id = 'my_dataset'
dataset_ref = bigquery_client.dataset(dataset_id)  # Create a DatasetReference using a chosen dataset ID.
dataset = bigquery.Dataset(dataset_ref)  # Construct a full Dataset object to send to the API.
dataset.location = 'US'  # The new dataset must be in the same location as the source dataset you query.

# Send the dataset to the API for creation. Raises google.api_core.exceptions.AlreadyExists if the Dataset already exists within the project.
dataset = bigquery_client.create_dataset(dataset)  # API request
print('Dataset {} created.'.format(dataset.dataset_id))
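If you re-run the notebook, the dataset will already exist and the call above will raise. A small guard like this sketch avoids that; it catches Conflict, the HTTP 409 error class that the AlreadyExists error derives from:

from google.api_core.exceptions import Conflict

try:
    dataset = bigquery_client.create_dataset(dataset)  # API request
    print('Dataset {} created.'.format(dataset.dataset_id))
except Conflict:
    # The dataset already exists in this project; reuse it instead of failing.
    print('Dataset {} already exists.'.format(dataset_id))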

Run a query on BQ using Python:

There are two variants here:

  1. Allowing Large Results
  2. Querying without the --allow_large_results flag

I am using the public dataset bigquery-public-data:hacker_news and the table comments to run the query.

Allowing Large Results

DestinationTableName='table_id1'  #Enter the name of the new table you want to create
!bq query --allow_large_results --destination_table=project_id:my_dataset.$DestinationTableName 'SELECT * FROM [bigquery-public-data:hacker_news.comments]'

This query will allow large query results if required.

Without mentioning --allow_large_results:

DestinationTableName='table_id2'  #Enter the name of the new table you want to create
!bq query --destination_table=project_id:my_dataset.$DestinationTableName 'SELECT * FROM [bigquery-public-data:hacker_news.comments] LIMIT 100'

This will work as long as the query result stays within the maximum response size described in the Google BigQuery documentation.

Output:

  1. A new dataset on BQ with the name my_dataset
  2. Results of the queries saved as tables in my_dataset
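As a quick check you can list the tables that now exist in my_dataset. This sketch reuses the bigquery_client and the placeholder dataset name from above and assumes a reasonably recent google-cloud-bigquery client:

dataset_ref = bigquery_client.dataset('my_dataset')
for table in bigquery_client.list_tables(dataset_ref):  # API request
    print(table.table_id)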

Note:

These are bq command-line commands that you can also run in a terminal (without the leading !). Because we are running them from a Python notebook here, the ! tells the notebook to pass the command to the shell; a sketch for running the same command from a plain Python script follows below.
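For example, a minimal sketch of running the first query from an ordinary Python script via subprocess (the project, dataset, and table names are the same placeholders as above):

import subprocess

destination = 'project_id:my_dataset.table_id1'
subprocess.run(
    ['bq', 'query', '--allow_large_results',
     '--destination_table={}'.format(destination),
     'SELECT * FROM [bigquery-public-data:hacker_news.comments]'],
    check=True,  # raise if the bq command fails
)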