I'm having some difficulties creating a table in Google BigQuery using CSV data that we download from another system.
The goal is to have a bucket in Google Cloud Platform to which we upload one CSV file per month. These CSV files have around 3,000 - 10,000 rows of data, depending on the month.
The error I am getting from the job history in the BigQuery API is:
Error while reading data, error message: CSV table encountered too many errors, giving up. Rows: 2949; errors: 1. Please look into the errors[] collection for more details.
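In case it helps, this is roughly how I pull the full error details referenced by the message, using the google-cloud-bigquery Python client (the project ID, location, and job ID below are placeholders, not the real ones):

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")          # placeholder project ID
job = client.get_job("my_load_job_id", location="US")   # placeholder job ID / location

# job.errors is the "errors[] collection" the message points to: one entry per
# failing row, with the reason and the offending location.
for err in job.errors or []:
    print(err)
```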
When I am uploading the CSV files, I am selecting the following (a rough Python-client equivalent of these settings is sketched after the list):
- file format: csv
- table type: native table
- auto detect: tried automatic and manual
- partitioning: no partitioning
- write preference: WRITE_EMPTY (cannot change this)
- number of errors allowed: 0
- ignore unknown values: unchecked
- field delimiter: comma
- header rows to skip: 1 (also tried 0, and manually deleting the header row from the CSV files).
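For reference, the settings above correspond roughly to this load job configuration in the Python client; the bucket path, dataset, and table names are placeholders:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project ID

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    autodetect=True,                       # also tried an explicit schema
    field_delimiter=",",
    skip_leading_rows=1,                   # also tried 0
    max_bad_records=0,
    ignore_unknown_values=False,
    write_disposition=bigquery.WriteDisposition.WRITE_EMPTY,
)

load_job = client.load_table_from_uri(
    "gs://my-bucket/monthly-export.csv",   # placeholder GCS path
    "my-project.my_dataset.my_table",      # placeholder destination table
    job_config=job_config,
)
load_job.result()  # raises with the same "too many errors" message on failure
```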
Any help would be greatly appreciated.