
AWS CLI command tasks in Ansible playbooks work fine from the command line if AWS credentials are supplied as environment variables, as boto requires (see the AWS CLI "Environment Variables" documentation). But they fail in Tower, because Tower exports a different pair of environment variables:

AWS_ACCESS_KEY
AWS_SECRET_KEY

To make them work in Tower, add the following to the task definition:

environment:
  AWS_ACCESS_KEY_ID: "{{ lookup('env','AWS_ACCESS_KEY') }}"
  AWS_SECRET_ACCESS_KEY: "{{ lookup('env','AWS_SECRET_KEY') }}"

For example, this task:

- name: Describe instances
  command: aws ec2 describe-instances --region us-east-1

becomes:

- name: Describe instances
  command: aws ec2 describe-instances --region us-east-1
  environment:
    AWS_ACCESS_KEY_ID: "{{ lookup('env','AWS_ACCESS_KEY') }}"
    AWS_SECRET_ACCESS_KEY: "{{ lookup('env','AWS_SECRET_KEY') }}"

NOTE: This injects the environment variables only for the specific task, not for the whole playbook, so every AWS CLI task has to be amended this way.
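If a playbook contains many such tasks, Ansible also accepts an `environment` block at the play (or block) level, so the mapping only needs to be declared once; a minimal sketch, assuming the same Tower variable names as above:

```yaml
- hosts: localhost
  # Applies to every task in this play; individual tasks can still override it.
  environment:
    AWS_ACCESS_KEY_ID: "{{ lookup('env', 'AWS_ACCESS_KEY') }}"
    AWS_SECRET_ACCESS_KEY: "{{ lookup('env', 'AWS_SECRET_KEY') }}"
  tasks:
    - name: Describe instances
      command: aws ec2 describe-instances --region us-east-1
```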

Using IAM roles makes authentication easier, and the built-in modules (and fact gathering) can be very helpful. - tedder42
IAM roles help only when Ansible runs on EC2 instances in the cloud. The above workaround is useful when Tower runs outside of AWS. - Costas
An IAM role is best, but if it's not an option for whatever reason, this post explains how to set the variables in a DRYer way. I am not related in any way to the blog in question, but it helped me. - mrlabbe

2 Answers


Put your environment variables in a file:

export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=

Save the file as ~/.vars on the remote host, then source it in your playbook:

- name: Describe instances
  shell: source ~/.vars && aws ec2 describe-instances --region us-east-2
  args:
    executable: /bin/bash

(The `shell` module is needed here rather than `command`, because `command` does not go through a shell and cannot handle `source` or `&&`.) For security, you could delete the file after the run and copy it again in the next play.
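That copy-then-delete cycle could be sketched like this (the file name ~/.vars comes from the answer above; the local source path `files/vars` is an assumption):

```yaml
- name: Copy the credentials file to the remote host
  copy:
    src: files/vars          # hypothetical local path holding the export lines
    dest: ~/.vars
    mode: "0600"
  no_log: true               # keep credential contents out of the task output

- name: Describe instances
  shell: source ~/.vars && aws ec2 describe-instances --region us-east-2
  args:
    executable: /bin/bash

- name: Remove the credentials file again
  file:
    path: ~/.vars
    state: absent
```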


While this may not be applicable to Tower (we use the open-source version): set up your ~/.aws and/or ~/.boto files.
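For the ~/.aws case, the AWS CLI and boto read credentials from a standard file layout; a minimal example with placeholder values:

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = AKIA...
aws_secret_access_key = ...

# ~/.aws/config
[default]
region = us-east-1
```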