0 votes

I've created an S3-linked stage in Snowflake called csv_stage using my AWS credentials, and the creation was successful.

Now I'm trying to query the stage as shown below:

select t.$1, t.$2 from @sandbox_ra.public.csv_stage/my_file.csv t

However, the error I'm getting is:

Failure using stage area. Cause: [The AWS Access Key Id you provided is not valid.]

Any idea why? Do I have to pass something in the query itself?

Thanks for your help!

Ultimately, let's say my S3 location has 3 different CSV files. I would like to load each of them individually into different Snowflake tables. What's the best way to go about doing this?

Have you provided the Snowflake user access to the files? - Digvijay S

3 Answers

1 vote

Regarding the last part of your question: you can load multiple files with one COPY INTO command by listing the file names or by using a regex pattern. But as you have 3 different files going into 3 different tables, you will also need three separate COPY INTO commands.
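
A minimal sketch of the three commands, assuming hypothetical table names (table_a, table_b, table_c) and file names (file_a.csv, file_b.csv, file_c.csv) on the stage, and that the stage's default file format matches your CSVs:

-- table and file names below are placeholders
COPY INTO sandbox_ra.public.table_a FROM @sandbox_ra.public.csv_stage FILES = ('file_a.csv');
COPY INTO sandbox_ra.public.table_b FROM @sandbox_ra.public.csv_stage FILES = ('file_b.csv');
COPY INTO sandbox_ra.public.table_c FROM @sandbox_ra.public.csv_stage FILES = ('file_c.csv');

Alternatively, PATTERN = '.*\.csv' selects files by regex when one command should cover several files with the same layout.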

Regarding querying your stage, you can find some more hints in these questions:

  1. Missing List-permissions on AWS - Snowflake - Failure using stage area. Cause: [The AWS Access Key Id you provided is not valid.]
  2. https://community.snowflake.com/s/question/0D50Z00008EKjkpSAD/failure-using-stage-area-cause-access-denied-status-code-403-error-code-accessdeniedhow-to-resolve-this-error
  3. https://aws.amazon.com/de/premiumsupport/knowledge-center/access-key-does-not-exist/

0 votes

I found out the AWS credentials I provided were not correct. After fixing them, the query worked.
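
For reference, a minimal sketch of recreating the stage with corrected credentials (the bucket URL and key values are placeholders):

-- placeholder URL and credential values
CREATE OR REPLACE STAGE sandbox_ra.public.csv_stage
  URL = 's3://my-bucket/my-prefix/'
  CREDENTIALS = (AWS_KEY_ID = '<aws_key_id>' AWS_SECRET_KEY = '<aws_secret_key>');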

0 votes

This approach works to import data from a public S3 bucket into a Snowflake table:

COPY INTO SNOW_SCHEMA.table_name FROM 's3://test-public/new/solution/file.csv';
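
If the file has a header row or a non-default delimiter, a FILE_FORMAT clause can be added; the options below are assumptions about the file's layout:

-- SKIP_HEADER and FIELD_DELIMITER values are assumptions about the file
COPY INTO SNOW_SCHEMA.table_name
FROM 's3://test-public/new/solution/file.csv'
FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1 FIELD_DELIMITER = ',');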