3 votes

I was updating an S3 bucket policy today to allow a Lambda function in a separate account to PutObject into that bucket. Somewhere in the course of updating that policy, I broke my external stage in Snowflake.

I can run LIST @stage/subfolder and see a list of all filenames in the stage. However, if I attempt to

SELECT metadata$filename FROM @stage/subfolder

I receive the error "Failed to access remote file: access denied. Please check your credentials".

I am connecting to Snowflake via the third option in the documentation (https://docs.snowflake.net/manuals/user-guide/data-load-s3-config.html): I established an IAM user and provided its access key ID and secret access key when creating the external stage.
Everything was working until I made the separate changes to the bucket policy.
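
For context, the stage was created along these lines (stage name, bucket, and keys below are placeholders):

CREATE OR REPLACE STAGE my_stage
  URL = 's3://my-bucket/subfolder/'
  CREDENTIALS = (AWS_KEY_ID = '<access_key_id>' AWS_SECRET_KEY = '<secret_access_key>');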

Does being able to LIST a stage but not SELECT from it ring a bell for anyone? If not, I'll be happy to provide more specifics of the policies I've created.

running into this myself - did you find a solution? – eamon1234

3 Answers

1 vote

See Snowflake's troubleshooting documentation for the error "FAILED TO ACCESS REMOTE FILE: ACCESS DENIED. PLEASE CHECK YOUR CREDENTIALS":

This error can occur for a few different reasons:

  1. Incorrect credentials were defined when creating the stage.
  2. The access policy does not correctly include the bucket resource when set up using a storage integration or IAM role.
  3. The access policy does not list the KMS key ID even though the bucket is encrypted.

If you are able to list the contents of the bucket using the storage integration you created, I strongly suspect this is related to bucket encryption. You can grant KMS key access with a key policy statement like:

"Sid": "Allow use of the key",
"Effect": "Allow",
"Principal": {
    "AWS": "arn:aws:iam::xxxxxx"
},
"Action": [
    "kms:Encrypt",
    "kms:Decrypt",
    "kms:ReEncrypt*",
    "kms:GenerateDataKey*",
    "kms:DescribeKey"
],
"Resource": "*"

You should only need kms:Decrypt and kms:GenerateDataKey if the integration only needs read access to the bucket contents.
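
If the bucket uses SSE-KMS, the key may also need to be referenced on the Snowflake side. A minimal sketch, assuming a stage named my_stage and a placeholder key ARN:

ALTER STAGE my_stage SET
  ENCRYPTION = (TYPE = 'AWS_SSE_KMS' KMS_KEY_ID = 'arn:aws:kms:<region>:<account_id>:key/<key_id>');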

0 votes

On AWS, listing objects and accessing objects require different privileges. Are you sure the IAM user has access to these files? You mentioned that the Lambda function is in a separate account; when it uploads objects, does it grant "bucket-owner-full-control"?

How to change S3 file ownership while cross-account uploading
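
As a sketch of the bucket-policy side, the destination bucket can require that ACL on uploads; the bucket name and account ID below are placeholders:

{
    "Sid": "RequireBucketOwnerFullControl",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::<lambda_account_id>:root"
    },
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::my-bucket/*",
    "Condition": {
        "StringEquals": {
            "s3:x-amz-acl": "bucket-owner-full-control"
        }
    }
}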

0 votes

For the record, I would recommend the first option, creating a storage integration, since there is no need to enter credentials during the process.
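
A minimal sketch, with placeholder names for the integration, role, and bucket:

CREATE STORAGE INTEGRATION s3_int
  TYPE = EXTERNAL_STAGE
  STORAGE_PROVIDER = 'S3'
  ENABLED = TRUE
  STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::<account_id>:role/<snowflake_access_role>'
  STORAGE_ALLOWED_LOCATIONS = ('s3://my-bucket/subfolder/');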

To answer your question, though: ensure that the s3:GetObject and s3:GetObjectVersion permissions are granted in the policy; those are probably what went missing when you edited it.
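
For reference, a minimal read-only policy for the IAM user would look something like the following; the bucket name and prefix are placeholders. Note that s3:ListBucket alone is enough for LIST @stage to succeed, while SELECT needs s3:GetObject on the objects themselves, which matches the symptom above.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion"
            ],
            "Resource": "arn:aws:s3:::my-bucket/subfolder/*"
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::my-bucket",
            "Condition": {
                "StringLike": {
                    "s3:prefix": ["subfolder/*"]
                }
            }
        }
    ]
}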