0 votes

I've created a Cloud Function for sending data to BigQuery. The Cloud Function receives its data from Pub/Sub.

Scenario 1: I write Python code that sends JSON data directly to BigQuery, no problem.

Scenario 2: I save the JSON data to a .json file and use the bq load command to upload it to BigQuery manually, no problem.

Scenario 3 (where the error comes in): the Cloud Function can receive data from Pub/Sub, but cannot send it to BigQuery.

Here's the code of the Cloud Function:

    from google.cloud import bigquery
    import base64, json, sys, os

    def pubsub_to_bq(event, context):
        if 'data' in event:
            print("Event Data is found : " + str(event['data']))
            name = base64.b64decode(event['data']).decode('utf-8')
        else:
            name = 'World'
        print('Hello {}!'.format(name))


        pubsub_message = base64.b64decode(event['data']).decode('utf-8')
        print(pubsub_message)
        to_bigquery(os.environ['dataset'], os.environ['table'], json.loads(str(pubsub_message)))

    def to_bigquery(dataset, table, document):
        bigquery_client = bigquery.Client()
        table = bigquery_client.dataset(dataset).table(table)

        job_config.source_format = bq.SourceFormat.NEWLINE_DELIMITED_JSON
        job_config = bq.LoadJobConfig()
        job_config.autodetect = True

        errors = bigquery_client.insert_rows_json(table,json_rows=[document],job_config=job_config)
        if errors != [] :
            print(errors, file=sys.stderr)

I've tried the JSON data in both formats, but no luck: [{"field1":"data1","field2":"data2"}] or {"field1":"data1","field2":"data2"}

The only error message I can get from the Cloud Functions event logs is: textPayload: "Function execution took 100 ms, finished with status: 'crash'"

Could any expert help me on this? Thanks.

Why do you want to do this? Instead, write a Dataflow job; there are Dataflow templates available for this. – bigbounty
Your error could be related to a known issue being investigated here, which is in the process of being fixed. – Artemis Georgakopoulou

1 Answer

1 vote

If you have a look at the library code for insert_rows_json, you can see this signature:

    def insert_rows_json(
        self,
        table,
        json_rows,
        row_ids=None,
        skip_invalid_rows=None,
        ignore_unknown_values=None,
        template_suffix=None,
        retry=DEFAULT_RETRY,
        timeout=None,
    ):

There is no job_config parameter! The crash most likely comes from this mistake.

The insert_rows_json method performs a streaming insert, not a load job.
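If you want to keep the streaming approach, a minimal sketch of the corrected helper (not your exact code, and assuming the destination table and its schema already exist, since streaming inserts don't autodetect a schema) would be:

    from google.cloud import bigquery
    import sys

    def to_bigquery(dataset, table, document):
        bigquery_client = bigquery.Client()
        table_ref = bigquery_client.dataset(dataset).table(table)
        # Streaming insert: no job_config argument is accepted here
        errors = bigquery_client.insert_rows_json(table_ref, json_rows=[document])
        if errors:
            print(errors, file=sys.stderr)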

For a load job from JSON, you can use the load_table_from_json method, which you can also find in the source code of the library. Its signature is similar to this (note the job_config option):

    def load_table_from_json(
        self,
        json_rows,
        destination,
        num_retries=_DEFAULT_NUM_RETRIES,
        job_id=None,
        job_id_prefix=None,
        location=None,
        project=None,
        job_config=None,
    ):
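Putting it together, here is a hedged sketch of your to_bigquery function rewritten to use a load job instead, assuming schema autodetection is what you want (the library serializes the list of dicts to newline-delimited JSON for you):

    from google.cloud import bigquery

    def to_bigquery(dataset, table, document):
        bigquery_client = bigquery.Client()
        table_ref = bigquery_client.dataset(dataset).table(table)

        job_config = bigquery.LoadJobConfig()
        job_config.autodetect = True

        # Start a load job from a list of dicts and wait for it to finish
        load_job = bigquery_client.load_table_from_json(
            [document], table_ref, job_config=job_config
        )
        load_job.result()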