
I'm trying to run Terraform CLI v0.12.28 on AWS EC2 via the user-data script. When the instance is provisioned, it should initiate the infrastructure build-out automatically.

The infrastructure to build could be on other clouds or in other accounts. The credentials are stored in SSM Parameter Store, and the EC2 instance has an instance profile (role) that permits access to Parameter Store.
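
For context, the credential fetch from Parameter Store presumably looks something like the sketch below (the parameter names are assumptions, not the actual values; the instance profile must allow ssm:GetParameter):

# Hypothetical: pull the Spaces keys out of SSM Parameter Store.
SPACES_KEY=$(aws ssm get-parameter --name /tf/spaces_access_key \
    --with-decryption --query Parameter.Value --output text)
SPACES_SECRET=$(aws ssm get-parameter --name /tf/spaces_secret_key \
    --with-decryption --query Parameter.Value --output text)

The relevant part of the user-data script then runs: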

export TF_LOG=TRACE
export TF_IN_AUTOMATION=1
export AWS_PROFILE=digital_ocean
export AWS_SDK_LOAD_CONFIG=1
AWS_EC2_METADATA_DISABLED=true
/usr/local/bin/terraform init -input=false
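
The digital_ocean profile named by AWS_PROFILE refers to an entry in the shared credentials file, written earlier in the setup; spelled out, it would look something like this (a sketch with placeholder values):

# Hypothetical: the named profile that AWS_PROFILE=digital_ocean selects.
mkdir -p /root/.aws
cat > /root/.aws/credentials <<'CREDS'
[digital_ocean]
aws_access_key_id     = DO_SPACES_ACCESS_KEY
aws_secret_access_key = DO_SPACES_SECRET_KEY
CREDS
chmod 0600 /root/.aws/credentials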

The problem is that terraform init fails because the request includes an X-Amz-Security-Token header, which is not understood by the other cloud provider (DigitalOcean Spaces, which implements the AWS S3 API):

[INFO] Attempting to use session-derived credentials
[INFO] Successfully derived credentials from session
[INFO] AWS Auth provider used: "EC2RoleProvider"
...
HTTP/1.1 501 Not Implemented
Connection: close
Content-Length: 248
Content-Type: application/xml
Strict-Transport-Security: max-age=15552000; includeSubDomains; preload
X-Amz-Error-Code: NotImplemented
X-Amz-Error-Message: Server does not support one or more requested headers. Please see https://developers.digitalocean.com/documentation/spaces/#aws-s3-compatibility
X-Do-Spaces-Error: unsupported_header_x-amz-security-token
...
status code: 501, request id: , host id: 
Error refreshing state: NotImplemented: Server does not support one or more requested headers. Please see https://developers.digitalocean.com/documentation/spaces/#aws-s3-compatibility
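
For reference, the backend configuration implied by this setup would be along the lines of the sketch below (bucket, key, region, and endpoint are placeholders for an S3-compatible Spaces backend):

# Hypothetical backend definition pointing the S3 backend at Spaces.
cat > backend.tf <<'HCL'
terraform {
  backend "s3" {
    bucket                      = "my-space"
    key                         = "terraform.tfstate"
    region                      = "us-east-1"
    endpoint                    = "https://nyc3.digitaloceanspaces.com"
    skip_credentials_validation = true
  }
}
HCL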

Executing the same commands manually after provisioning succeeds, and the log shows a different credentials provider:

[INFO] AWS Auth provider used: "SharedCredentialsProvider"

The key difference seems to be the AWS Auth provider used.

My question: how can I persuade Terraform to use only the SharedCredentialsProvider, even when the EC2 instance has an instance profile attached?

Also, why would there be a difference in the Auth provider when a user logs in and executes the same commands manually (sudo su - root ...)?

Platform: EC2 / Amazon Linux 2


1 Answer


The problem was that the user-data environment was missing some variables. Most likely the key one was HOME: without it, the AWS SDK cannot resolve the shared credentials file at $HOME/.aws/credentials and falls back to the instance role.

I added the following to the user-data script, and terraform init works as expected:

export AWS_AUTO_SCALING_HOME=/opt/aws/apitools/as
export AWS_CLOUDWATCH_HOME=/opt/aws/apitools/mon
export AWS_ELB_HOME=/opt/aws/apitools/elb
export AWS_PATH=/opt/aws
export EC2_AMITOOL_HOME=/opt/aws/amitools/ec2
export EC2_HOME=/opt/aws/apitools/ec2
export HOME=/root
export LOGNAME=root
export PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/opt/aws/bin:/root/bin
export SHELL=/bin/bash
export USER=root
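
A quick way to verify that the shared credentials file is being picked up (a sketch; it assumes the credentials file is at /root/.aws/credentials under the digital_ocean profile):

# Hypothetical one-off check; TF_LOG output goes to stderr.
HOME=/root AWS_PROFILE=digital_ocean AWS_SDK_LOAD_CONFIG=1 TF_LOG=TRACE \
    /usr/local/bin/terraform init -input=false 2>&1 \
    | grep 'AWS Auth provider used'
# Expect: [INFO] AWS Auth provider used: "SharedCredentialsProvider"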

To be more specific, the user-data script performs some setup (creates directories, installs packages, etc.) and defers the Terraform invocation using 'at', so that the EC2 instance can finish booting properly.

The environment variables are set in the 'at' script:

# yum install ... &co
#
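# Write a self-contained runner script with the full environment set
# explicitly, since the deferred job does not run in a login shell.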
cat > /run/tf/run_tf <<EOF
#!/bin/bash
cd /run/tf/XXX

export TF_VAR_zzz="${...}"
export TF_LOG=TRACE
export TF_IN_AUTOMATION=1
export AWS_PROFILE="..."
export AWS_SDK_LOAD_CONFIG=1
export AWS_EC2_METADATA_DISABLED=true

export AWS_AUTO_SCALING_HOME=/opt/aws/apitools/as
export AWS_CLOUDWATCH_HOME=/opt/aws/apitools/mon
export AWS_ELB_HOME=/opt/aws/apitools/elb
export AWS_PATH=/opt/aws
export EC2_AMITOOL_HOME=/opt/aws/amitools/ec2
export EC2_HOME=/opt/aws/apitools/ec2
export HOME=/root
export LOGNAME=root
export PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/opt/aws/bin:/root/bin
export SHELL=/bin/bash
export USER=root

/usr/local/bin/terraform init -input=false
/usr/local/bin/terraform plan -input=false -out=tfplan
EOF
chmod 0755 /run/tf/run_tf
at now +2 minutes -f /run/tf/run_tf
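
A side note on 'at': by default it delivers job output via local mail, so redirecting the run's output to a file makes the TRACE log easier to inspect afterwards (a sketch; the log path is arbitrary):

# Hypothetical alternative invocation that keeps a log on disk.
echo "/run/tf/run_tf > /run/tf/run_tf.log 2>&1" | at now + 2 minutes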