0 votes

Here is what I am trying to do:

I have access logs in Account A which are encrypted by default by AWS, and I have a Lambda function and an S3 bucket in Account B. I want to trigger the Lambda when a new object lands in the Account A S3 bucket, and have the Lambda in Account B download the data and write it to the Account B S3 bucket. Below are the blockers I am facing.

First approach: I was able to get the trigger from a new object in the Account A S3 bucket to the Lambda in Account B; however, the Lambda in Account B is not able to download the object - Access Denied error. After looking into it for a couple of days, I figured it is because the access logs are encrypted by default and there is no way I can add the Lambda role to the encryption key policy so that it can encrypt/decrypt the log files. So I moved on to the second approach.

Second approach: I moved my Lambda to Account A. Now the source S3 bucket and the Lambda are in Account A and the destination S3 bucket is in Account B. I can process the access logs in Account A via the Lambda in Account A, but when it writes the file to the Account B S3 bucket I get an Access Denied error while downloading/reading the file.

Lambda role policy (in addition to full S3 access and full Lambda access):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1574387531641",
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*"
        },
        {
            "Sid": "Stmt1574387531642",
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::Account-B-bucket",
                "arn:aws:s3:::Account-B-bucket/*"
            ]
        }
    ]
}

Trust relationship:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "lambda.amazonaws.com",
                "AWS": "arn:aws:iam::Account-B-ID:root"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}

Destination - Account B s3 bucket policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::Account-A-ID:role/service-role/lambda-role"
                ]
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::Account-B-Bucket",
                "arn:aws:s3:::Account-B-Bucket/*"
            ]
        },
        {
            "Sid": "Stmt11111111111111",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::Account-A-ID:root"
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::Account-B-Bucket",
                "arn:aws:s3:::Account-B-Bucket/*"
            ]
        }
    ]
}

I am stuck here. I want the Lambda to be able to decrypt the access logs, read/process the data, and write it to the S3 bucket in the other account. Am I missing something? Help is much appreciated!

Adding file metadata: File property screenshot

Lambda Code:

import io

import boto3

s3 = boto3.client('s3')

# Reading access logs from Account A. The Lambda is also running in Account A.
response = s3.get_object(Bucket=access_log_bucket, Key=access_log_key)
body = response['Body']
content = io.BytesIO(body.read())

# Processing the access logs
processed_content = process_it(content)

# Writing to the Account B S3 bucket
s3.put_object(Body=processed_content,
    Bucket=processed_bucket,
    Key=processed_key)
Could it be that the Trust Policy is referring to Account-B rather than Account-A? – John Rotenstein
Your Lambda function does not seem to have any KMS permissions, so it won't be able to read the encrypted objects from Account-A-Bucket. Also, how are you copying the files? Are you using CopyObject(), or are you downloading and then uploading? What specific command caused the Access Denied error? – John Rotenstein
Using the first approach, how do you propose to give KMS permissions? What I have read is that you need to add the Lambda role to the key policy; however, the AWS default server-side encryption key policy can't be updated. Also, I can't use a CMK for access logs. Please let me know if I am missing something here. – user1113186
For the second approach, as my Lambda is in the same account as my source S3 bucket, i.e. Account A, I don't think I need to give it any KMS permissions. In this approach, when my Lambda writes to the Account B S3 bucket, the owner of the file is not correct, I guess. So when I try to download the file from the console or query the data via Athena/boto3 I get Access Denied. – user1113186
If you are writing/copying to Bucket B, you will need to specify the bucket-owner-full-control ACL so that Account B 'owns' the object. – John Rotenstein

2 Answers

3 votes

Rather than downloading and then uploading the object, I would recommend that you use the copy_object() command.

The benefit of using copy_object() is that the object will be copied directly by Amazon S3, without the need to first download the object.

When doing so, the credentials you use must have read permissions on the source bucket and write permissions on the destination bucket. (However, if you are 'processing' the data, this of course won't apply.)

As part of this command, you can specify an ACL:

ACL='bucket-owner-full-control'

This is required because the object is being written from credentials in Account A to a bucket owned by Account B. Using bucket-owner-full-control will pass control of the object to Account B. (It is not required if using credentials from Account B and 'pulling' an object from Account A.)
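A minimal sketch of the copy with boto3 (the bucket and key names below are placeholders, not from the question):

import boto3

s3 = boto3.client('s3')

# Copy the object directly within S3 and grant the bucket owner (Account B)
# full control of the new object.
s3.copy_object(
    CopySource={'Bucket': 'account-a-access-logs', 'Key': 'access_log_key'},
    Bucket='Account-B-bucket',
    Key='processed/access_log_key',
    ACL='bucket-owner-full-control'
)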

0 votes

Thanks John Rotenstein for the direction. I found the solution: I only needed to add ACL='bucket-owner-full-control' to the put_object call. Below is the complete boto3 call.

s3.put_object(
    ACL='bucket-owner-full-control',
    Body=processed_content,
    Bucket=processed_bucket,
    Key=processed_key)
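As a sanity check, the object should now be readable from Account B. A sketch, assuming a local AWS profile holding Account B credentials (the profile, bucket, and key names are placeholders):

import boto3

# "account-b" is a placeholder profile name with Account B credentials.
session = boto3.Session(profile_name='account-b')
s3_b = session.client('s3')

# This succeeds once the object was written with ACL='bucket-owner-full-control';
# without that ACL, Account B gets Access Denied even though it owns the bucket.
obj = s3_b.get_object(Bucket='Account-B-bucket', Key='processed/access_log_key')
print(obj['ContentLength'])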