
I have Ansible playbooks ready; they include several encrypted vars. In the normal process, I can feed a vault password file to decrypt them with --vault-password-file ~/.vault_pass.txt and deploy the change to a remote EC2 instance, so I don't need to expose the password file.

But my requirement is different here. I need to include the ansible-playbook run in the user-data script when creating a new EC2 instance. Ideally, all settings should be ready automatically once the instance is running.

I deploy the instances with Terraform using the simple user-data script below:

#!/usr/bin/bash

yum -y update
/usr/local/bin/aws s3 cp s3://<BUCKET>/ansible.tar.gz ansible.tar.gz
gtar zxvf ansible.tar.gz
cd ansible
ansible-playbook -i inventory/ec2.py -c local ROLE.yml

So if there are encrypted vars in the playbook, I have to put my password file into the user-data script as well.

Is there anything I can do to avoid that? Would Ansible Tower help with this?

I did test CredStash, but that still has a chicken-and-egg issue.

github.com/jhaals/ansible-vault, I'll make a conclusion later. – BMW

1 Answer


If you want your instances to configure themselves, they are going to need either all the credentials or another way to get the credentials, ideally with some form of one-time pass.

The best I can think of off the top of my head is to use Hashicorp's Vault to store the credentials (potentially all of your secrets, or maybe just the Ansible Vault password that can then be used to un-vault your Ansible variables) and have your deploy process create a one-time-use token that is injected into the user-data script via Terraform's templating.
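
If you go down that route, the Vault side needs the secret stored and a token role tied to a read-only policy. A rough sketch with the Vault CLI (the ansible_vault_read name and the secret/ansible_vault_pass path are assumptions chosen to match the snippets below; older Vault CLIs spell the policy command as vault policy-write):

# Store the Ansible Vault password in the KV backend (v1-style path,
# matching the /v1/secret/ansible_vault_pass read further down)
vault write secret/ansible_vault_pass ansible_vault_pass="s3cr3t"

# Policy that can only read that one secret
cat > ansible_vault_read.hcl <<'EOF'
path "secret/ansible_vault_pass" {
  capabilities = ["read"]
}
EOF
vault policy write ansible_vault_read ansible_vault_read.hcl

# Token role so tokens minted via auth/token/create/ansible_vault_read
# are restricted to that policy
vault write auth/token/roles/ansible_vault_read allowed_policies="ansible_vault_read"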

To do this, you'll probably want to wrap your terraform apply command with some kind of helper script that might look like this (untested):

#!/bin/bash

vault_host="10.0.0.3"
vault_port="8200"

# Create a single-use token against the ansible_vault_read token role;
# $VAULT_TOKEN must be allowed to create tokens for that role.
response=`curl \
            -X POST \
            -H "X-Vault-Token: $VAULT_TOKEN" \
            -d '{"num_uses": 1}' \
            http://${vault_host}:${vault_port}/v1/auth/token/create/ansible_vault_read`

vault_token=`echo ${response} | jq '.auth.client_token' --raw-output`

# Double quotes (not single) so the shell expands the variables, and
# line continuations so this runs as one command.
terraform apply \
  -var "vault_host=${vault_host}" \
  -var "vault_port=${vault_port}" \
  -var "vault_token=${vault_token}"

Your user-data script will then need to be templated in Terraform with something like this (also untested):

template.tf:

resource "template_file" "init" {
    template = "${file("${path.module}/init.tpl")}"

    vars {
        vault_host  = "${var.vault_host}"
        vault_port  = "${var.vault_port}"
        vault_token = "${var.vault_token}"
    }
}
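
To actually use the rendered script you also need to declare those variables and hand the output to the instance as user data; roughly like this (the aws_instance resource and AMI are placeholders, not from the original setup):

variable "vault_host" {}
variable "vault_port" {}
variable "vault_token" {}

resource "aws_instance" "web" {
    ami           = "ami-xxxxxxxx"
    instance_type = "t2.micro"

    # init.tpl rendered with the Vault details injected
    user_data = "${template_file.init.rendered}"
}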

init.tpl:

#!/usr/bin/bash

yum -y update

# Shell variables are referenced without ${} below so Terraform's template
# rendering doesn't try to interpolate them; only vault_token, vault_host
# and vault_port are template variables.
response=`curl \
            -H "X-Vault-Token: ${vault_token}" \
            -X GET \
            http://${vault_host}:${vault_port}/v1/secret/ansible_vault_pass`

ansible_vault_password=`echo $response | jq '.data.ansible_vault_pass' --raw-output`

echo $ansible_vault_password > ~/.vault_pass.txt

/usr/local/bin/aws s3 cp s3://<BUCKET>/ansible.tar.gz ansible.tar.gz
gtar zxvf ansible.tar.gz
cd ansible
ansible-playbook -i inventory/ec2.py -c local ROLE.yml --vault-password-file ~/.vault_pass.txt

Alternatively, you could simply have the instances call out to something such as Ansible Tower to trigger the playbook to be run against them. This allows you to keep the secrets on the central box doing the configuration rather than having to distribute them to every instance you deploy.

With Ansible Tower this is done using provisioning callbacks: you set up a job template and then have your user-data script curl Tower to trigger the configuration run. You could change your user-data script to something like this instead:

template.tf:

resource "template_file" "init" {
    template = "${file("${path.module}/init.tpl")}"

    vars {
        ansible_tower_host      = "${var.ansible_tower_host}"
        ansible_host_config_key = "${var.ansible_host_config_key}"
    }
}

init.tpl:

#!/usr/bin/bash

# Trigger the provisioning callback for this host on the job template
curl \
  -X POST \
  --data "host_config_key=${ansible_host_config_key}" \
  http://${ansible_tower_host}/api/v1/job_templates/1/callback/

The host_config_key may look like a secret at first glance, but it's a shared key that multiple hosts can use to access a job template, and Ansible Tower will still only run the job if the host is either defined in a static inventory for the job template or, when you are using dynamic inventories, found by that lookup.
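
One caveat with dynamic inventories: a freshly launched instance may not be visible until the next inventory sync, so the callback can fail on the first try. A simple (untested) workaround is to retry the callback from the user-data script for a while, for example:

# Retry the provisioning callback for up to ~10 minutes in case the
# dynamic inventory has not picked up this instance yet
for attempt in $(seq 1 10); do
  curl -sf \
    -X POST \
    --data "host_config_key=${ansible_host_config_key}" \
    http://${ansible_tower_host}/api/v1/job_templates/1/callback/ && break
  sleep 60
done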