Setting up a Terraform remote backend on an AWS S3 bucket is relatively easy.
First, create a bucket in the region of your choice (eu-west-1 in this example), named terraform-backend-store (remember that bucket names must be globally unique, so pick your own).
To do so, open your terminal and run the following command, assuming that you have properly set up the AWS CLI (otherwise, follow the instructions in the official documentation):
aws s3api create-bucket --bucket terraform-backend-store \
--region eu-west-1 \
--create-bucket-configuration \
LocationConstraint=eu-west-1
# Output:
{
"Location": "http://terraform-backend-store.s3.amazonaws.com/"
}
The command should be self-explanatory; to learn more, check the s3api create-bucket documentation.
Once the bucket is in place, it needs a proper configuration for security and reliability.
For a bucket that holds the Terraform state, it’s common sense to enable server-side encryption. Keeping it simple, try the AES256 method first (although I recommend using KMS and implementing a proper key rotation):
aws s3api put-bucket-encryption \
--bucket terraform-backend-store \
--server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'
# Output: expect none when the command is executed successfully
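If you prefer the KMS-based option mentioned above, the call is nearly identical; a minimal sketch, assuming you have already created a customer-managed key with the (hypothetical) alias alias/terraform-state:
aws s3api put-bucket-encryption \
--bucket terraform-backend-store \
--server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"aws:kms","KMSMasterKeyID":"alias/terraform-state"}}]}'
# Output: expect none when the command is executed successfully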
Next, it’s crucial to restrict access to the bucket; create an unprivileged IAM user as follows:
aws iam create-user --user-name terraform-deployer
# Output:
{
    "User": {
        "UserName": "terraform-deployer",
        "Path": "/",
        "CreateDate": "2019-01-27T03:20:41.270Z",
        "UserId": "AIDAIOSFODNN7EXAMPLE",
        "Arn": "arn:aws:iam::123456789012:user/terraform-deployer"
    }
}
Take note of the Arn from the command’s output (it looks like: “Arn”: “arn:aws:iam::123456789012:user/terraform-deployer”).
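If you need to retrieve the ARN again later, you can query it directly (a small convenience, not strictly required):
aws iam get-user --user-name terraform-deployer --query 'User.Arn' --output text
# Output:
# arn:aws:iam::123456789012:user/terraform-deployer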
To interact with the S3 service, and later with DynamoDB to implement the locking, our IAM user must hold a sufficient set of permissions.
Much stricter restrictions are recommended for production environments; for the sake of simplicity, though, start by attaching the managed policies AmazonS3FullAccess and AmazonDynamoDBFullAccess:
aws iam attach-user-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --user-name terraform-deployer
# Output: expect none when the command execution is successful
aws iam attach-user-policy --policy-arn arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess --user-name terraform-deployer
# Output: expect none when the command execution is successful
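For Terraform to authenticate as this user, it also needs a set of access keys in the credentials file referenced later by the provider configuration; a minimal sketch, assuming you store them under the default profile (in practice you may prefer a dedicated profile):
aws iam create-access-key --user-name terraform-deployer
# Output: a JSON document containing the AccessKeyId and SecretAccessKey

# ~/.aws/credentials
[default]
aws_access_key_id = <AccessKeyId from the output above>
aws_secret_access_key = <SecretAccessKey from the output above>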
The freshly created IAM user must also be allowed to execute the required actions against your S3 bucket. You can do this by creating and applying the right bucket policy, as follows:
cat <<EOF > policy.json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:user/terraform-deployer"
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::terraform-backend-store",
                "arn:aws:s3:::terraform-backend-store/*"
            ]
        }
    ]
}
EOF
This basic policy allows the principal with ARN “arn:aws:iam::123456789012:user/terraform-deployer” to execute every available S3 action (“Action”: “s3:*”) against the bucket with ARN “arn:aws:s3:::terraform-backend-store” and the objects it contains.
Again, in production you should enforce much stricter policies. For reference, have a look at the AWS Policy Generator.
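As a hedged sketch of what a stricter policy could look like, the statements below limit the user to the operations the S3 backend actually performs (listing the bucket and reading/writing the state object); the walkthrough below keeps using the basic policy.json:
# stricter-policy.json (example only)
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:user/terraform-deployer"
            },
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::terraform-backend-store"
        },
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:user/terraform-deployer"
            },
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::terraform-backend-store/terraform.tfstate"
        }
    ]
}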
Back in the terminal, run the command shown below to apply the policy to your bucket:
aws s3api put-bucket-policy --bucket terraform-backend-store --policy file://policy.json
# Output: none
As the last step, enable the bucket’s versioning:
aws s3api put-bucket-versioning --bucket terraform-backend-store --versioning-configuration Status=Enabled
Versioning keeps previous versions of the infrastructure’s state, making it easy to roll back to an earlier revision without struggling.
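To double-check that versioning is actually in place, you can query it back:
aws s3api get-bucket-versioning --bucket terraform-backend-store
# Output:
{
    "Status": "Enabled"
}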
The AWS S3 bucket is ready; time to integrate it with Terraform. Listed below is the minimal configuration required to set up this remote backend:
# terraform.tf
provider "aws" {
region = "${var.aws_region}"
shared_credentials_file = "~/.aws/credentials"
profile = "default"
}
terraform {
backend "s3" {
bucket = "terraform-remote-store"
encrypt = true
key = "terraform.tfstate"
region = "eu-west-1"
}
}
# the rest of your configuration and resources to deploy
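Note that the provider block references var.aws_region, which is not defined in the snippet itself; a minimal sketch of the (assumed) accompanying variables file:
# variables.tf
variable "aws_region" {
  description = "AWS region used by the provider"
  default     = "eu-west-1"
}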
Once the configuration is in place, Terraform must be initialized (again):
terraform init
The remote backend is ready for a ride; give it a try.
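A quick sanity check, assuming the bucket name used above: once the first apply has run, the state object should show up in the bucket:
aws s3 ls s3://terraform-backend-store/
# Output: the listing should include the terraform.tfstate object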
What about locking?
Storing the state remotely brings a pitfall, especially when working in scenarios where several tasks, jobs, and team members have access to it. Under these circumstances, the risk of multiple concurrent attempts to change the state is high. This is where locking comes to help: a feature that prevents the state from being modified while it is already in use.
You can implement the lock by creating an AWS DynamoDB table, used by Terraform to set and release the locks.
Provision the resource using terraform itself:
# create-dynamodb-lock-table.tf
resource "aws_dynamodb_table" "dynamodb-terraform-state-lock" {
name = "terraform-state-lock-dynamo"
hash_key = "LockID"
read_capacity = 20
write_capacity = 20
attribute {
name = "LockID"
type = "S"
}
tags {
Name = "DynamoDB Terraform State Lock Table"
}
}
and deploy it as shown:
terraform plan -out "planfile" && terraform apply -input=false -auto-approve "planfile"
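To confirm the table exists and exposes the expected LockID hash key, you can describe it (purely a verification step):
aws dynamodb describe-table \
--table-name terraform-state-lock-dynamo \
--query 'Table.{Name:TableName,Status:TableStatus,Key:KeySchema}'
# Output: a short summary; Status should read ACTIVE once provisioning completes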
Once the command execution is completed, the locking mechanism must be added to your backend configuration as follows:
# terraform.tf
provider "aws" {
region = "${var.aws_region}"
shared_credentials_file = "~/.aws/credentials"
profile = "default"
}
terraform {
backend "s3" {
bucket = "terraform-remote-store"
encrypt = true
key = "terraform.tfstate"
region = "eu-west-1"
dynamodb_table = "terraform-state-lock-dynamo"
}
}
# the rest of your configuration and resources to deploy
All done. Remember to run terraform init again, and enjoy your remote backend.
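One last practical note on locking: if a run is interrupted and leaves a stale lock behind, Terraform reports the lock ID it could not acquire, and you can release it manually (use with care, only when you are sure no other operation is running):
terraform force-unlock <LOCK_ID>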