15
votes

I want to set an S3 bucket policy so that all requests to upload to that bucket will use server side encryption, even if it is not specified in the request header.

I have seen this post (Amazon S3 Server Side Encryption Bucket Policy problems) where someone has managed to set a bucket policy that denies all put requests that don't specify server side encryption, but I don't want to deny, I want the puts to succeed but use server side encryption.
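For reference, the deny-style policy from that post looks roughly like this (bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnencryptedUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-bucket/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "AES256"
        }
      }
    }
  ]
}
```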

My issue is with streaming the output from EMR to my S3 bucket, I don't control the code that is making the requests, and it seems to me that server side encryption must be specified on a per request basis.


5 Answers

10
votes

IMHO there is no way to automatically tell Amazon S3 to turn on SSE for every PUT request. So, what I would investigate is the following:

  • write a script that lists the objects in your bucket

  • for each object, get the meta data

  • if SSE is not enabled, use the PUT COPY API (http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html) to add SSE "(...) When copying an object, you can preserve most of the metadata (default) or specify new metadata (...)"

  • If the PUT operation succeeded, use the DELETE object API to delete the original object

Then run that script on an hourly or daily basis, depending on your business requirements. You can use the S3 API in Python (http://boto.readthedocs.org/en/latest/ref/s3.html) to make the script easier to write.

If this "change-after-write" solution is not valid for you business-wise, you can work at a different level (aligned with Julio's answer above):

  • use a proxy between your API clients and the S3 API (like a reverse proxy on your site), and configure it to add the SSE HTTP header to every PUT / POST request. Developers must go through the proxy and must not be authorised to issue requests directly against the S3 API endpoints

  • write a wrapper library that adds the SSE metadata automatically, and oblige developers to use your library on top of the SDK.

The latter two are a matter of discipline in the organisation, as they are not easy to enforce at a technical level.

Seb

9
votes

Amazon just released a new feature, default bucket encryption, that handles this and makes it very easy.

An excerpt from the docs on setting this up:

Note that any existing bucket policies will be evaluated prior to any bucket encryption settings:

Amazon S3 evaluates and applies bucket policies before applying bucket encryption settings. Even if you enable bucket encryption settings, your PUT requests without encryption information will be rejected if you have bucket policies to reject such PUT requests. Check your bucket policy and modify it if required.

1
votes

S3 will not perform this automatically; you would have to create a workaround. I would suggest passing requests through a proxy that "enriches" them by adding the proper header. To do that I'd try (in order):

1- Apache Camel Content Enrich

2- NGINX / HTTPD mod_proxy

3- Custom code

I bet there is also a very smart ruby http lib for that too :)
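The NGINX variant might look roughly like this (hostnames are placeholders; one caveat I'd flag: Signature V4 signs `x-amz-*` headers, so clients must sign their requests against the proxy's endpoint rather than S3 directly for an injected header to be accepted):

```nginx
server {
    listen 443 ssl;
    server_name s3-proxy.internal.example.com;

    location / {
        # Inject the SSE header into every forwarded request:
        proxy_set_header x-amz-server-side-encryption AES256;
        proxy_pass https://mybucket.s3.amazonaws.com;
    }
}
```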

1
votes

I use this quick bash script to encrypt an entire bucket.

#!/bin/bash

bucket=$1

# Bail if no bucket name specified:
if [[ -z ${bucket} ]]
then
  echo "Usage: ./encrypt_bucket.sh <bucket_name>"
  exit 1
fi

echo "Encrypting Bucket: ${bucket}"
echo "---------------------------------"

# Just so control-c will work and not keep trying to loop through:
trap "echo Stopping Encryption!; exit 1" SIGINT SIGTERM

# Note: keys containing whitespace will break this parsing.
function get_file_list() {
  file_list=$(aws s3 ls --recursive "s3://${bucket}" | awk '{print $NF}')
  num_files=$(echo "${file_list}" | wc -l)
}

function s3cp() {
  # Copy the object over itself, adding server-side encryption:
  aws s3 cp --sse "s3://${bucket}/$1" "s3://${bucket}/$1"
}

get_file_list

num=0

for i in ${file_list}
do
  num=$(( num + 1 ))
  echo "Encrypting File: ${num} of ${num_files}"
  echo "_________________"
  s3cp "${i}"
  echo ""
done

0
votes

You can use an AWS Lambda function that sets server-side AES256 encryption on each object touched by a PUT operation. Check https://github.com/eleven41/aws-lambda-encrypt-s3-objects. I haven't tried it, but it looks like exactly what you want.