10 votes

I have two accounts (acc-1 and acc-2).
acc-1 hosts an API that handles file uploads into a bucket of acc-1 (let's call it upload). An upload triggers an SNS notification that kicks off image conversion or video transcoding. The resulting files are placed into another bucket in acc-1 (output), which again triggers an SNS notification. I then copy the files (as the user api from acc-1) to their final bucket in acc-2 (content).
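
For reference, the event wiring on the upload bucket looks roughly like this boto3 sketch (the topic ARN and region are placeholders, not my actual setup):

import boto3

# Rough sketch of the upload-bucket event wiring (topic ARN is a placeholder).
s3 = boto3.client('s3')
s3.put_bucket_notification_configuration(
    Bucket='upload',
    NotificationConfiguration={
        'TopicConfigurations': [{
            'TopicArn': 'arn:aws:sns:<REGION>:<ACC_1_ID>:convert',
            'Events': ['s3:ObjectCreated:*'],
        }]
    },
)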

content bucket policy in acc-2

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<ACC_1_ID>:user/api"
      },
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::content/*"
    }
  ]
}

api user policy in acc-1

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::upload/*",
        "arn:aws:s3:::output/*",
        "arn:aws:s3:::content/*"
      ]
    }
  ]
}

I copy the files using the aws-sdk for nodejs, setting the ACL to bucket-owner-full-control so that users from acc-2 can access the copied files in content, although the api user from acc-1 is still the owner of the files.
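
In boto3 terms the copy is equivalent to this sketch (bucket and key are placeholders; my actual code uses the nodejs SDK):

import boto3

# boto3 equivalent of the nodejs copy (bucket/key are placeholders).
s3 = boto3.client('s3')  # credentials of the acc-1 "api" user

s3.copy_object(
    CopySource={'Bucket': 'output', 'Key': 'folder1/test1.jpg'},
    Bucket='content',
    Key='folder1/test1.jpg',
    # Grants acc-2 (the bucket owner) full control of the object,
    # but acc-1 remains the object owner.
    ACL='bucket-owner-full-control',
)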

This all works fine - files are stored in the content bucket with access for bucket-owner and the api user.

Files from the content bucket are private for everyone else and should be served through a CloudFront distribution.

I created a new CloudFront distribution for web and used the following settings:

Origin Domain Name: content
Origin Path: /folder1
Restrict Bucket Access: yes
Origin Access Identity: create new identity
Grant Read Permissions on Bucket: yes, update bucket policy

This created a new Origin Access Identity and changed the bucket policy to:

content bucket policy afterwards

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<ACC_1_ID>:user/api"
      },
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::content/*"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity <OAI_ID>"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::content/*"
    }
  ]
}

But trying to access files from the content bucket inside the folder1 folder doesn't work when I use the CloudFront URL:

❌ https://abcdef12345.cloudfront.net/test1.jpg

This returns a 403 'Access denied'.

If I upload a file (test2.jpg) from acc-2 directly to content/folder1 and try to access it, it works ...!?

✅ https://abcdef12345.cloudfront.net/test2.jpg

Other than having different owners, test1.jpg and test2.jpg seem completely identical.

What am I doing wrong?


4 Answers

8 votes

Unfortunately, this is the expected behavior. OAIs can't access objects owned (created) by a different account because bucket-owner-full-control uses an unusual definition of "full" that excludes bucket policy grants to principals outside your own AWS account -- and the OAI's canonical user is, technically, outside your AWS account.

If another AWS account uploads files to your bucket, that account is the owner of those files. Bucket policies only apply to files that the bucket owner owns. This means that if another account uploads files to your bucket, the bucket policy that you created for your OAI will not be evaluated for those files.

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html#private-content-granting-permissions-to-oai
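
One way to see this for yourself is to compare the object owners; a boto3 sketch, run with acc-2 credentials (the bucket and keys are the ones from the question):

import boto3

# Compare who owns the two objects from the question. bucket-owner-full-control
# includes READ_ACP, so acc-2 can read the ACL even on the acc-1-owned object.
s3 = boto3.client('s3')

for key in ('folder1/test1.jpg', 'folder1/test2.jpg'):
    owner = s3.get_object_acl(Bucket='content', Key=key)['Owner']
    print(key, '->', owner.get('DisplayName', owner['ID']))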

4 votes

As @Michael - sqlbot pointed out in his answer, this is the expected behavior.

A possible solution is to perform the copy to the final bucket using credentials from the acc-2 account, so the owner of the objects will always be acc-2. There are at least two options for doing that:

1) Use temporary credentials and the AWS STS AssumeRole API: you create an IAM role in acc-2 with enough permissions to perform the copy to the content bucket (PutObject and PutObjectAcl), then from the acc-1 API you call AWS STS AssumeRole to get temporary credentials by assuming that role, and perform the copy using those temporary access keys. This is the most secure approach; see the sketch after this list.

2) Use access keys: you could create an IAM user in acc-2, generate regular access keys for it, and hand those keys to acc-1, so acc-1 uses those "permanent" credentials to perform the copy. Distributing access keys across AWS accounts is not a good idea from a security standpoint, and AWS discourages you from doing so, but it's certainly possible. It can also be a problem from a maintainability point of view: acc-1 has to store the access keys very safely, and acc-2 should rotate them somewhat frequently.
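
A rough boto3 sketch of option 1, run inside the acc-1 API (the role name is hypothetical; the role must live in acc-2, trust acc-1, and also needs s3:GetObject on the source bucket, e.g. via a bucket policy on output):

import boto3

# Assume a role in acc-2 from the acc-1 API (role name is hypothetical).
sts = boto3.client('sts')
creds = sts.assume_role(
    RoleArn='arn:aws:iam::<ACC_2_ID>:role/content-writer',
    RoleSessionName='copy-to-content',
)['Credentials']

# A client built from the temporary credentials acts as acc-2, so acc-2
# ends up owning whatever it writes into the content bucket.
s3 = boto3.client(
    's3',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)
s3.copy_object(
    CopySource={'Bucket': 'output', 'Key': 'folder1/test1.jpg'},
    Bucket='content',
    Key='folder1/test1.jpg',
)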

1 vote

The solution has two steps:

1. Run the command below using the source account credentials; it grants the bucket owner full control of the object:

aws s3api put-object-acl --bucket bucket_name --key object_name --acl bucket-owner-full-control

2. Run the command below using the destination account credentials; copying the object onto itself rewrites it under the destination account's identity, so the destination account becomes the owner:

aws s3 cp s3://object_path s3://object_path --metadata-directive COPY
0 votes

My solution uses an S3 PutObject event and Lambda. When acc-1 puts an object, the S3 PutObject event fires and a Lambda function running in acc-2 overwrites the object, so acc-2 becomes the owner. This is my program (Python 3).

import boto3
from urllib.parse import unquote_plus


s3_client = boto3.client('s3')

def lambda_handler(event, context):
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        # Keys in S3 event records are URL-encoded (e.g. spaces become '+').
        key = unquote_plus(record['s3']['object']['key'])
        # Download the object and re-upload it under this function's (acc-2)
        # identity, which makes acc-2 the owner of the object.
        filename = '/tmp/tmpfile'
        s3_client.download_file(bucket, key, filename)
        s3_client.upload_file(filename, bucket, key)
        # Caveats: the re-upload does not preserve the original Content-Type
        # or metadata, and it fires another PutObject event for the same key,
        # so guard against re-processing (e.g. skip objects acc-2 already owns).