
COPY command skipping files while loading data from an S3 bucket in Snowflake. Is there any way I can find out how many files were processed out of the entire list of files? My command looks something like this:

   COPY INTO abcd.abcdefhg
   FROM 's3://dfsdc/asdfa/dfasdaf/sdfasd/'
   CREDENTIALS = (AWS_KEY_ID = '********************' AWS_SECRET_KEY = '************')
   FILE_FORMAT = (
       COMPRESSION = 'GZIP'
       FIELD_DELIMITER = '|'
       RECORD_DELIMITER = '\n'
       SKIP_HEADER = 0
       FIELD_OPTIONALLY_ENCLOSED_BY = '"'
       TRIM_SPACE = FALSE
       ERROR_ON_COLUMN_COUNT_MISMATCH = TRUE
       ESCAPE = '\134'
       -- ESCAPE_UNENCLOSED_FIELD = 'NONE'
       ESCAPE_UNENCLOSED_FIELD = NONE
       DATE_FORMAT = 'AUTO'
       TIMESTAMP_FORMAT = 'AUTO'
       NULL_IF = ('')
   )
   ON_ERROR = CONTINUE;
1
Hi Hemanth, please take a look at the LOAD_HISTORY view (docs.snowflake.net/manuals/sql-reference/info-schema/…) and the COPY_HISTORY table function (docs.snowflake.net/manuals/sql-reference/functions/…). Also, the VALIDATE function should have information about the files that were skipped over due to errors - see here: docs.snowflake.net/manuals/sql-reference/functions/… – Mike Donovan
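
A rough sketch of how those checks might look, assuming the target table abcd.abcdefhg from the question and an arbitrary one-hour window for COPY_HISTORY:

   -- Files loaded into the table, from the INFORMATION_SCHEMA view
   SELECT file_name, status, row_count, row_parsed, first_error_message
   FROM abcd.information_schema.load_history
   WHERE table_name = 'ABCDEFHG';

   -- Per-file load activity via the COPY_HISTORY table function
   -- (the one-hour window below is just an example)
   SELECT file_name, status, row_count, first_error_message
   FROM TABLE(abcd.information_schema.copy_history(
       table_name => 'ABCDEFHG',
       start_time => DATEADD(hour, -1, CURRENT_TIMESTAMP())));

   -- Errors from the most recent COPY into the table
   SELECT *
   FROM TABLE(VALIDATE(abcd.abcdefhg, job_id => '_last'));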

1 Answer


If a file has already been loaded into the target table, it won't be processed again unless you use the option FORCE = TRUE. You can also check the load status of the files using the metadata views available under each database (e.g. the LOAD_HISTORY view in INFORMATION_SCHEMA).

After the COPY command completes, the result panel shows the load status of each file.
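
For example, a forced reload might look like the sketch below, reusing the stage path from the question with a trimmed-down file format; note that FORCE = TRUE reloads files even if they were already loaded, which can create duplicate rows:

   COPY INTO abcd.abcdefhg
   FROM 's3://dfsdc/asdfa/dfasdaf/sdfasd/'
   CREDENTIALS = (AWS_KEY_ID = '********************' AWS_SECRET_KEY = '************')
   FILE_FORMAT = (COMPRESSION = 'GZIP' FIELD_DELIMITER = '|' FIELD_OPTIONALLY_ENCLOSED_BY = '"')
   ON_ERROR = CONTINUE
   FORCE = TRUE;  -- reload previously loaded files as well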