5
votes

I've been trying for the past couple of hours to set up a transfer from S3 to my Google Storage bucket.

The error that I keep getting when creating the transfer is: "Invalid access key. Make sure the access key for your S3 bucket is correct, or set the bucket permissions to Grant Everyone."

Both the access key and the secret are correct, given that they are currently in use in production for full S3 access.

A couple of things to note:

  1. CORS is enabled on the S3 bucket
  2. The bucket policy only allows authenticated AWS users to list/view its contents
  3. S3 requires signed URLs for access

Bucket Policy:

{
    "Version": "2008-10-17",
    "Id": "Policy234234234",
    "Statement": [
        {
            "Sid": "Stmt234234",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": [
                "s3:AbortMultipartUpload",
                "s3:GetObjectAcl",
                "s3:RestoreObject",
                "s3:GetObjectVersion",
                "s3:DeleteObject",
                "s3:DeleteObjectVersion",
                "s3:PutObjectVersionAcl",
                "s3:PutObjectAcl",
                "s3:GetObject",
                "s3:PutObject",
                "s3:GetObjectVersionAcl"
            ],
            "Resource": "arn:aws:s3:::mybucket/*"
        },
        {
            "Sid": "2",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity xyzmatey"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::mybucket/*"
        },
        {
            "Sid": "3",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:::mybucket"
        }
    ]
}

CORS Policy

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>http://www.mywebsite.com</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedMethod>DELETE</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedHeader>AUTHORIZATION</AllowedHeader>
    </CORSRule>
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>HEAD</AllowedMethod>
        <AllowedHeader>AUTHORIZATION</AllowedHeader>
    </CORSRule>
</CORSConfiguration>

Any idea where I have gone wrong?

EDIT: I've set up the gsutil tool on a Google Compute Engine instance and did a copy with the same AWS keys on the exact same bucket. Worked like a charm.

5
Your bucket policy doesn't include "s3:ListBucket". I am guessing that the transfer service might need that in order to get a list of objects to transfer. Try adding it to the list? Of course, that wouldn't explain how gsutil manages to copy the bucket, so that may be wrong. – Brandon Yarbrough
Hey Brandon, I added the permission you mentioned above. Same result: invalid key. I'll see if I can get hold of Google's support staff for this one. Thanks. – Mysteryos

5 Answers

8
votes

I'm one of the devs on Transfer Service.

You'll need to add "s3:GetBucketLocation" to your permissions.

It would be preferable if the error you received were specifically about your ACLs rather than about an invalid key, though. I'll look into that.

EDIT: Adding more info to this post. The documentation lists this requirement: https://cloud.google.com/storage/transfer/

Here's a quote from the section on "Configuring Access":

"If your source data is an Amazon S3 bucket, then set up an AWS Identity and Access Management (IAM) user so that you give the user the ability to list the Amazon S3 bucket, get the location of the bucket, and read the objects in the bucket." [Emphasis mine.]
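Putting that together, the minimal bucket-policy statements the transfer service needs would look roughly like this (a sketch using the question's bucket name `mybucket`; the account ID and user name in the Principal are placeholders for whichever IAM user's keys you give to the transfer job):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "TransferListAndLocate",
            "Effect": "Allow",
            "Principal": { "AWS": "arn:aws:iam::123456789012:user/transfer-user" },
            "Action": [ "s3:ListBucket", "s3:GetBucketLocation" ],
            "Resource": "arn:aws:s3:::mybucket"
        },
        {
            "Sid": "TransferReadObjects",
            "Effect": "Allow",
            "Principal": { "AWS": "arn:aws:iam::123456789012:user/transfer-user" },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::mybucket/*"
        }
    ]
}
```

Note that `s3:ListBucket` and `s3:GetBucketLocation` apply to the bucket ARN itself, while `s3:GetObject` applies to the `/*` object ARN; mixing them in one statement with the wrong Resource is a common reason these grants silently fail.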

EDIT2: Much of the information provided in this answer could be useful for others, so it will remain here, but John's answer actually got to the bottom of OP's issue.

6
votes

I am an engineer on the Transfer service. The reason you encountered this problem is that the AWS S3 region ap-southeast-1 (Singapore) is not yet supported by the Transfer service, because GCP does not have a networking arrangement with AWS S3 in that region. We can consider supporting that region now, but your transfer will be much slower than from other regions.

On our end, we are making a fix to display a clearer error message.

3
votes

You can also get the 'Invalid access key' error if you try to transfer a subdirectory rather than a root S3 bucket. For example, I tried to transfer s3://my-bucket/my-subdirectory and it kept failing with the invalid access key error, despite me giving Google read permissions on the entire S3 bucket. It turns out the Google transfer service doesn't support transferring subdirectories of an S3 bucket; you must specify the bucket root as the source for the transfer: s3://my-bucket.

3
votes

Maybe this can help:

First, specify s3_host in your boto config file, i.e., the endpoint containing the region (no need to specify s3_host if the region is us-east-1, which is the default). E.g.:

vi ~/.boto

s3_host = s3-us-west-1.amazonaws.com

That's it. Now you can proceed with any one of these commands:

gsutil -m cp -r s3://bucket-name/folder-name gs://Bucket/

gsutil -m cp -r s3://bucket-name/folder-name/specific-file-name gs://Bucket/

gsutil -m cp -r s3://bucket-name/folder-name/* gs://Bucket/

gsutil -m cp -r s3://bucket-name/folder-name/file-name-Prefix* gs://Bucket/

You can also try rsync:

https://cloud.google.com/storage/docs/gsutil/commands/rsync
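For reference, the ~/.boto edit from this answer can be sketched as a complete config file (a sketch; the key values are placeholders, and I'm assuming current gsutil/boto versions, where the S3 options live in the [Credentials] section):

```ini
; ~/.boto -- boto config used by gsutil for the S3 side of the copy
[Credentials]
aws_access_key_id = YOUR_AWS_ACCESS_KEY_ID
aws_secret_access_key = YOUR_AWS_SECRET_ACCESS_KEY
; Regional S3 endpoint; omit for us-east-1 (the default, s3.amazonaws.com)
s3_host = s3-us-west-1.amazonaws.com
```

With that in place, `gsutil -m rsync -r s3://bucket-name gs://bucket-name` mirrors the whole bucket in one go.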

0
votes

I encountered the same problem a couple of minutes ago, and I was able to solve it easily by using an admin access key and secret key.

It worked for me. Just FYI, my S3 bucket was in the North Virginia region (us-east-1).