2 votes

We're starting to use Docker/CoreOS as our infrastructure. We're deploying on EC2. The CoreOS cluster is managed by an auto-scaling group, so new hosts come and go. Plus, there are a lot of them. I'm trying to find a way to distribute a secret (a private RSA key or a shared secret for a symmetric cipher) to all hosts so I can use that to securely distribute things like database credentials, AWS access keys for certain services, etc.

I'd like to obey "the principle of least privilege". Specifically, if I have 2 apps in 2 different containers running on the same host, each should only have access to the secrets that app needs. For example, app A might have access to the MySQL credentials, and app B might have access to AWS access keys for Dynamo, but A can't access Dynamo and B can't access MySQL.

If I had a secret on each server then this wouldn't be hard. I could use a tool like Crypt to read encrypted configuration data out of etcd and then use volume maps to selectively make credentials available to individual containers.
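
For what it's worth, that per-container split is the easy part once the host can decrypt its secrets: keep each app's credentials in a separate host directory and volume-map only that directory into the matching container. A rough sketch (directory layout and image names are made up):

# Host has already decrypted per-app credential files (e.g. via crypt + etcd).
# Each container only sees its own secret through a read-only bind mount.
docker run -d --name app-a -v /etc/secrets/app-a:/secrets:ro my-app-a-image
docker run -d --name app-b -v /etc/secrets/app-b:/secrets:ro my-app-b-image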

The question is: how the heck do I get the keys onto the hosts safely?

Here are some things I've considered and why they don't work:

  • Use AWS roles to grant each host access to an encrypted S3 bucket. The hosts can then read a shared secret from there. But this doesn't work because S3 has a REST API, Docker doesn't limit the network access containers have, and the role applies to the whole host. Thus, any container on that host can read the key out of S3, then read all the values out of etcd (which also has an unrestricted REST API) and decrypt them.
  • In my CloudFormation template I can have a parameter for a secret key. This then gets embedded in the UserData and distributed to all hosts. Unfortunately, any container can retrieve the key via the metadata service REST API.
  • Use fleet to submit a global unit to all the hosts and have that unit copy the keys. However, containers can access fleet via its REST API and do a "fleetctl cat" to see the key.
  • Put a secret key in a container in a private repo. That can then be distributed to all hosts as a global unit, and an app in that container can copy the key out to a volume mount. However, I assume that, given the credentials to the private repo, somebody could download the image with standard network tools and extract the key (albeit with some effort). The problem then becomes how to distribute the .dockercfg with the credentials for the private repo securely, which, I think, gets us right back where we started.

Basically, it seems like the core problem is that everything has a REST API and I don't know of a way to prevent containers from accessing certain network resources.
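
To make the exposure concrete, here is roughly what any container on one of these hosts can do today (illustrative commands; the etcd address assumes it's listening on the default docker bridge):

# Read the host's IAM role credentials from the EC2 metadata service
# (substitute the host's actual role name)
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>

# Read the raw user data
curl -s http://169.254.169.254/latest/user-data

# Dump every key from etcd, if it listens on the docker bridge (commonly 172.17.42.1:4001)
curl -s http://172.17.42.1:4001/v2/keys/?recursive=true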

Ideas?


1 Answer

2 votes

If you're willing to save the secret in an AMI you could then use the Crypt solution you mentioned. I implemented something similar as follows:

  1. Generate a public/private keypair (see the sketch just after this list)
  2. Bake the private key into the AMI used for your autoscale groups
  3. Use the public key to encrypt the bootstrapping script, including secrets
  4. Base64 encode the encrypted bootstrap script
  5. Embed the encoded text in a wrapper script that decrypts using the private key, and use that as the userdata for the AWS launch configuration.
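
For step 1, one way to produce the pair (this is my assumption, not necessarily how it was originally done) is a self-signed certificate, since openssl smime expects the recipient's public half to be an X.509 certificate rather than a bare key:

# The private key gets baked into the AMI; the self-signed cert acts as the
# "public key" that openssl smime encrypts against.
openssl req -x509 -nodes -newkey rsa:4096 -days 3650 -subj "/CN=bootstrap" \
    -keyout /tmp/secret.key -out /tmp/public.key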

For example, the bootstrap script might look like this:

db="mysql://username:password@somehost:3306/somedb"
apikey="some_api_secret_key"
docker run --name "first container" -e db=$db -d MyImage MyCommand
docker run --name "second container" -e apikey=$apikey -d MyOtherImage MyOtherCommand

To encrypt, use openssl's smime command to work around the low size limit of rsautl (rsautl can only encrypt a payload smaller than the RSA key itself). Assuming that the bootstrap script lives in /tmp/bootstrap.txt, it can be encrypted and encoded like this:

$ openssl smime -encrypt -aes256 -binary -outform DER -in /tmp/bootstrap.txt /tmp/public.key | openssl base64 -e > /tmp/encrypted.b64

The wrapper script that becomes the userdata then might look like this:

#!/bin/bash -x
exec >> /tmp/userdata.log 2>&1

cat << END > /tmp/bootstrap.dat
<contents of /tmp/encrypted.b64>
END
decrypted_blob=$(cat /tmp/bootstrap.dat | openssl base64 -d | openssl smime -decrypt -inform DER -binary -inkey /path/to/secret.key)
eval "${decrypted_blob}"
rm /tmp/bootstrap.dat

Now if the containers access EC2 metadata, they'll see the userdata script but it just has the encrypted blob. The private key is on the host, which containers don't have access to (theoretically).

Note also that the User Data size limit is 16KB, so the script and its encrypted data must be less than that.
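
Assuming the wrapper ends up saved as, say, /tmp/wrapper.sh, a quick check before wiring it into the launch configuration:

# User data must come in under 16384 bytes
wc -c < /tmp/wrapper.sh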