
I’m looking for recommendations, and help with an issue I’m having, with setting up and managing S3 bucket and bucket policy creation for multiple environments and for multiple regions within a single environment.

I have 4 AWS accounts (dev, stg, prod1, and prod2, which is a copy of prod1). In prod1 we have two Kubernetes clusters, aws-us-prod1 and aws-eu-prod1. These two clusters are completely independent of one another; each merely serves customers in its own region.

I have applications running on these two clusters (aws-us-prod1 and aws-eu-prod1) that need to write content to an S3 bucket, but the two clusters share an AWS account (prod1).

I’m trying to write some Terraform resource automation to manage this, but I haven’t been able to variably control which region a bucket gets put in. The latest docs show a region attribute on the bucket resource, but it doesn’t work, because of how the resource has been implemented against the AWS provider’s own region attribute.

What I’d like to do is something like this:

variable "buckets" {
    type = map(string) # e.g. buckets='{"a-us-prod1": "us-west-2", "a-eu-prod1":"eu-west-2"}'
}

resource "aws_s3_bucket" "my_buckets" {
    for_each = var.buckets

    bucket = each.key
    region = each.value
}

resource "aws_s3_bucket_policy" "my_buckets_policy" {
    for_each = aws_s3_bucket.my_buckets
    bucket = each.value.id
    policy = ...
}

I’ve tried using multiple providers with aliases, but you can’t select a provider programmatically based on the value of a variable you are iterating over. What’s the proper way to organize this project and these resources to accomplish this?
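For example, something like this gets rejected outright, because the provider meta-argument has to be a static reference and can’t be an expression (a sketch of what I tried):

resource "aws_s3_bucket" "my_buckets" {
    for_each = var.buckets

    bucket   = each.key
    provider = aws[each.value] # error: Terraform does not allow dynamic provider references
}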

These issues I’ve come across are related: https://github.com/hashicorp/terraform/issues/3656 and https://github.com/terraform-providers/terraform-provider-aws/issues/5999

I'd question whether it's a good idea to apply changes to both regions at the same time. If you were to separate these (maybe so your directory structure goes account/region/cluster or something like that) then it becomes a lot simpler and also minimises blast radius. – ydaetskcoR

1 Answer


The region attribute was just removed from the aws_s3_bucket resource in terraform-provider-aws v3.0.0, released July 31, 2020. Before that you could set the region for a bucket, it would be respected, and the bucket would be created in the selected region. However, that was unlike how every other resource is managed; the attribute probably only existed because S3 is globally scoped and a bucket has no region in its ARN. All other services use the region of the provider itself (as it should be).
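For comparison: an S3 bucket ARN looks like arn:aws:s3:::a-us-prod1 (no region, no account ID), whereas the ARN of a regional resource such as an SQS queue embeds the region, e.g. arn:aws:sqs:us-west-2:123456789012:my-queue.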

I would recommend creating a provider for each region you want to support, splitting var.buckets by region, and then creating one resource "aws_s3_bucket" per region:

variable "buckets" {
    type = map(string) # e.g. buckets='{"a-us-prod1": "us-west-2", "a-eu-prod1":"eu-west-2"}'
}

provider "aws" {
    region = "eu-west-2"
    alias  = "eu-west-2"
}

locals {
    # names of the buckets that belong in eu-west-2
    eu_west_2_buckets = [for name, region in var.buckets : name if region == "eu-west-2"]
}

resource "aws_s3_bucket" "eu_west_2_buckets" {
    count   = length(local.eu_west_2_buckets)
    bucket  = eu_west_2_buckets[count.index]
    provider = aws.eu-west-2
}
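If you also need the bucket policies from your question, the matching per-region resource could look roughly like this (a sketch; the principal, action, and role ARN are placeholders for whatever your applications actually need):

resource "aws_s3_bucket_policy" "eu_west_2_buckets" {
    count    = length(local.eu_west_2_buckets)
    bucket   = aws_s3_bucket.eu_west_2_buckets[count.index].id
    provider = aws.eu-west-2

    policy = jsonencode({
        Version = "2012-10-17"
        Statement = [{
            Sid       = "AllowAppWrite"
            Effect    = "Allow"
            Principal = { AWS = "arn:aws:iam::123456789012:role/app-role" } # placeholder principal
            Action    = "s3:PutObject"
            Resource  = "${aws_s3_bucket.eu_west_2_buckets[count.index].arn}/*"
        }]
    })
}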

If you instead want to roll out only the buckets that match the current deployment region, you can do that by simply changing the bucket-filtering logic:

variable "buckets" {
    type = map(string) # e.g. buckets='{"a-us-prod1": "us-west-2", "a-eu-prod1":"eu-west-2"}'
}

locals {
    # names of the buckets that belong in the region this configuration is applied to
    buckets = [for name, region in var.buckets : name if region == data.aws_region.current.name]
}

data "aws_region" "current" { }

resource "aws_s3_bucket" "buckets" {
    count   = length(local.buckets)
    bucket  = buckets[count.index]
}
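With this variant you run one apply per region (typically from a separate state or workspace per region) and point the provider at the right region each time. Assuming the deployment region is passed in from outside, the provider configuration could be as simple as:

variable "region" {
    type = string
}

provider "aws" {
    # the same configuration is applied once per region
    region = var.region
}

and you would deploy with, e.g., terraform apply -var 'region=eu-west-2'.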