
I have a problem loading CSV data into a Snowflake table. The fields are wrapped in double quote marks, and hence there is a problem with importing them into the table.

I know that COPY INTO has the CSV-specific option FIELD_OPTIONALLY_ENCLOSED_BY = '"', but it's not working at all.

Here are some pieces of the table definition and the copy command:

CREATE TABLE ...
(
GamePlayId NUMBER NOT NULL,
etc...
....);


COPY INTO ...
     FROM '...csv.gz'
FILE_FORMAT = (TYPE = CSV 
               STRIP_NULL_VALUES = TRUE 
               FIELD_DELIMITER = ',' 
               SKIP_HEADER = 1  
               ERROR_ON_COLUMN_COUNT_MISMATCH = FALSE 
               FIELD_OPTIONALLY_ENCLOSED_BY = '"'
              )
ON_ERROR = "ABORT_STATEMENT"
;

The CSV file looks like this:

"3922000","14733370","57256","2","3","2","2","2019-05-23 14:14:44",",00000000",",00000000",",00000000",",00000000","1000,00000000","1000,00000000","1317,50400000","1166,50000000",",00000000",",00000000",",00000000",",00000000",",00000000",",00000000",",00000000",",00000000",",00000000",",00000000",",00000000",",00000000",",00000000",",00000000",",00000000",",00000000"

I get an error:

Numeric value '"3922000"' is not recognized

I'm pretty sure it's because the NUMBER value is interpreted as a string when Snowflake reads the "" marks, but since I use

FIELD_OPTIONALLY_ENCLOSED_BY = '"' 

the quotes shouldn't even be there... Does anyone have a solution to this?

Typically, numeric and date fields are not quoted like this. The purpose of quoting a field is to allow field delimiters or record delimiters to be contained within a field (e.g. so that "Doe, John" stays one field despite the embedded comma), so there shouldn't be a reason to do that here. Can you export the file so that numeric and datetime fields are not quoted? – Mike Walton

2 Answers

2 votes

Maybe something is incorrect with your file? I was just able to run the following without issue.

1. create the test table:
CREATE OR REPLACE TABLE 
    dbNameHere.schemaNameHere.stacko_58322339 (
    num1    NUMBER,  
    num2    NUMBER, 
    num3    NUMBER);

2. create a test file with the following contents:
1,2,3
"3922000","14733370","57256"
3,"2",1
4,5,"6"

3. create a stage and put the file in the stage, for example:
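
A minimal sketch of this step, assuming a local file path and the same hypothetical stage name used in the copy command below:

CREATE OR REPLACE STAGE stageNameHere;
-- PUT must be run from a client that supports it, such as SnowSQL;
-- by default it gzip-compresses the upload, producing stacko_58322339.csv.gz
PUT file:///tmp/stacko_58322339.csv @stageNameHere;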

4. run the following copy command:
COPY INTO dbNameHere.schemaNameHere.STACKO_58322339
     FROM @stageNameHere/stacko_58322339.csv.gz
FILE_FORMAT = (TYPE = CSV 
               STRIP_NULL_VALUES = TRUE 
               FIELD_DELIMITER = ',' 
               SKIP_HEADER = 0  
               ERROR_ON_COLUMN_COUNT_MISMATCH=FALSE 
               FIELD_OPTIONALLY_ENCLOSED_BY = '"'
              )
ON_ERROR = "CONTINUE";

5. results:
+-----------------------------------------------------+--------+-------------+-------------+-------------+-------------+-------------+------------------+-----------------------+-------------------------+
| file                                                | status | rows_parsed | rows_loaded | error_limit | errors_seen | first_error | first_error_line | first_error_character | first_error_column_name |
|-----------------------------------------------------+--------+-------------+-------------+-------------+-------------+-------------+------------------+-----------------------+-------------------------|
| stageNameHere/stacko_58322339.csv.gz | LOADED |           4 |           4 |           4 |           0 | NULL        |             NULL |                  NULL | NULL                    |
+-----------------------------------------------------+--------+-------------+-------------+-------------+-------------+-------------+------------------+-----------------------+-------------------------+
1 Row(s) produced. Time Elapsed: 2.436s

6. view the records:
>SELECT * FROM dbNameHere.schemaNameHere.stacko_58322339;
+---------+----------+-------+                                                  
|    NUM1 |     NUM2 |  NUM3 |
|---------+----------+-------|
|       1 |        2 |     3 |
| 3922000 | 14733370 | 57256 |
|       3 |        2 |     1 |
|       4 |        5 |     6 |
+---------+----------+-------+

Can you try a similar test to this?

EDIT: A quick look at your data shows that many of your numeric fields appear to start with commas, so something is definitely amiss with the data.
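
For example, a quick check (assuming nothing beyond the sample values in the question) shows such values do not parse as numbers at all:

SELECT TRY_TO_NUMBER(',00000000');  -- returns NULL: a leading comma is not a valid numeric literal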

0 votes

Assuming your numbers are European formatted, with , as the decimal separator and . as the thousands separator, reading the numeric formatting help it seems Snowflake does not support this as input. I'd open a feature request.
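
A minimal check of that claim (assuming nothing beyond TRY_TO_DECIMAL on one of the sample values):

SELECT TRY_TO_DECIMAL('1000,00000000', 20, 10);  -- returns NULL: the comma decimal separator is not recognized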

But you can read the column in as text and then use REPLACE, like:

SELECT '100,1234'::text as A
    ,REPLACE(A, ',', '.') as B
    ,TRY_TO_DECIMAL(B, 20, 10) as C;

gives:

A         B         C
100,1234  100.1234  100.1234000000

Safer would be to strip the thousands placeholders first, like:

SELECT '1.100,1234'::text as A
  ,REPLACE(A, '.', '') as B
  ,REPLACE(B, ',', '.') as C
  ,TRY_TO_DECIMAL(C, 20, 10) as D;
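
which should give:

A           B          C          D
1.100,1234  1100,1234  1100.1234  1100.1234000000

To apply the same idea during the load itself, Snowflake's COPY INTO supports simple transformations over the staged file. A sketch under assumptions (hypothetical table and stage names, $N positional column references; check that the functions you use appear on Snowflake's list of functions supported in COPY transformations):

COPY INTO dbNameHere.schemaNameHere.myTable
     FROM (SELECT $1, $2,
                  -- strip . thousands separators, swap , to . , then convert
                  TRY_TO_DECIMAL(REPLACE(REPLACE($3, '.', ''), ',', '.'), 20, 10)
             FROM @stageNameHere/stacko_58322339.csv.gz)
FILE_FORMAT = (TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '"' SKIP_HEADER = 1);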