
I currently have two (possibly conflicting) S3 bucket policies, which show a permanent difference in Terraform. Before I show parts of the code, I will give an overview of the structure.

I am currently using a module, which:

  1. Takes an IAM role and an S3 bucket as inputs
  2. Attaches an S3 access policy to the provided role
  3. Attaches a VPC-allow policy to the provided S3 bucket

I have created some code (a snippet, not the full code) to illustrate what this looks like for the module.

The policies look like:

# S3 Policy to be attached to the ROLE
data "aws_iam_policy_document" "foo_iam_s3_policy" {
  statement {
    effect    = "Allow"
    resources = ["${data.aws_s3_bucket.s3_bucket.arn}/*"]
    actions   = ["s3:GetObject", "s3:GetObjectVersion"]
  }
  statement {
    effect    = "Allow"
    resources = [data.aws_s3_bucket.s3_bucket.arn]
    actions   = ["s3:*"]
  }
}

# VPC Policy to be attached to the BUCKET
data "aws_iam_policy_document" "foo_vpc_policy" {
  statement {
    sid       = "VPCAllow"
    effect    = "Allow"
    resources = [data.aws_s3_bucket.s3_bucket.arn, "${data.aws_s3_bucket.s3_bucket.arn}/*"]
    actions   = ["s3:GetObject", "s3:GetObjectVersion"]
    condition {
      test     = "StringEquals"
      variable = "aws:SourceVpc"
      values   = [var.foo_vpc]
    }
    principals {
      type        = "*"
      identifiers = ["*"]
    }
  }
}

The policy attachments look like:

# Turn policy into a resource to be able to use ARN
resource "aws_iam_policy" "foo_iam_policy_s3" {
  name        = "foo-s3-${var.s3_bucket_name}"
  description = "IAM policy for foo on s3"
  policy      = data.aws_iam_policy_document.foo_iam_s3_policy.json
}

# Attaches s3 bucket policy to IAM Role
resource "aws_iam_role_policy_attachment" "foo_attach_s3_policy" {
  role       = data.aws_iam_role.foo_role.name
  policy_arn = aws_iam_policy.foo_iam_policy_s3.arn
}

# Attach foo vpc policy to bucket
resource "aws_s3_bucket_policy" "foo_vpc_policy" {
  bucket = data.aws_s3_bucket.s3_bucket.id
  policy = data.aws_iam_policy_document.foo_vpc_policy.json
}

Now let's step outside the module, to where the S3 bucket (the one I mentioned will be passed into the module) is created, and where another policy needs to be attached to it. Outside the module, we:

  1. Provide an S3 bucket to the aforementioned module as input (alongside the IAM role)
  2. Create a policy to allow some IAM role to put objects in that bucket
  3. Attach the created policy to the bucket

The policy looks like:

# Create policy to allow bar to put objects in the bucket
data "aws_iam_policy_document" "bucket_policy_bar" {
  statement {
    sid       = "BarIAMAccess" # Sid must be alphanumeric; spaces are not allowed
    effect    = "Allow"
    resources = [module.s3_bucket.bucket_arn, "${module.s3_bucket.bucket_arn}/*"]
    actions   = ["s3:PutObject", "s3:GetObject", "s3:ListBucket"]
    principals {
      type        = "AWS"
      identifiers = [var.bar_iam]
    }
  }
}

And its attachment looks like:

# Attach Bar bucket policy
resource "aws_s3_bucket_policy" "attach_s3_bucket_bar_policy" {
  bucket = module.s3_bucket.bucket_name
  policy = data.aws_iam_policy_document.bucket_policy_bar.json
}

(For more context: basically, foo is a database that needs the VPC policy and the role's S3 policy to operate on the bucket, and bar is an external service that needs to write data to the bucket.)

What is going wrong

When I try to plan/apply, Terraform shows that there is always a change, and shows an overwrite between the S3 bucket policy of bar (bucket_policy_bar) and the VPC policy attached inside the module (foo_vpc_policy).

In fact, the error I am getting sounds like what is described here:

The usage of this resource conflicts with the aws_iam_policy_attachment resource and will permanently show a difference if both are defined.

But I am attaching policies to an S3 bucket and not to a role, so I am not sure whether this warning applies to my case.

Why are my policies conflicting? And how can I avoid this conflict?

EDIT: For clarification, I have a single S3 bucket to which I need to attach two policies: one that allows VPC access (foo_vpc_policy, which gets created inside the module) and another (bucket_policy_bar) that allows an IAM role to put objects in the bucket.

Can you clarify your setup? So you have two buckets, one in module module.aws_s3_bucket and the other in module.s3_bucket, and you want to mix their policies? – Marcin
Hi @Marcin. I have edited my question. Let me know if I can provide any further clarification. – alt-f4
So module.s3_bucket.bucket_name and module.aws_s3_bucket.bucket_arn refer to the same bucket, despite being in different modules? – Marcin
They refer to the same bucket (it's also the same module). I made the typo when I was writing up the question (in my actual code they are the same). Will fix in the question. – alt-f4
@Marcin It's basically the same bucket. It gets created outside of the module in s3.tf, then there are two (supposed to be independent) steps: 1. passing it to the module (which attaches a VPC access policy), and 2. attaching a policy that gives an IAM role access to the bucket (happens in s3.tf). The VPC access is needed so the database can get data from S3, and the IAM role access is needed so an external tool can put files into the bucket. I would have expected two policies to end up attached to the bucket, but instead I end up with a conflict. – alt-f4

2 Answers


there is always change

That is correct. aws_s3_bucket_policy sets a new policy on the bucket. It does not add new statements to the existing one.

Since you are invoking aws_s3_bucket_policy twice for the same bucket, first inside the module.s3_bucket module and then again in the parent module (I guess), the parent module will simply set a new policy on the bucket. When you run terraform plan/apply again, Terraform will detect that the policy differs from the one defined in module.s3_bucket, and will try to update it. So you basically end up in a loop, where each apply changes the bucket policy to a new one.

I'm not aware of a Terraform resource that would allow you to update (i.e. add new statements to) an existing bucket policy. Thus I would try to refactor your design so that you execute aws_s3_bucket_policy only once, with all the statements that you require.
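A sketch of what that refactor could look like, assuming an AWS provider version that supports source_policy_documents in aws_iam_policy_document, and assuming the module exposes its VPC policy JSON as an output (all names below are illustrative, not from your code):

# Merge the module's VPC policy with the bar policy into one document
data "aws_iam_policy_document" "combined" {
  source_policy_documents = [
    module.s3_bucket.foo_vpc_policy_json, # assumed module output
    data.aws_iam_policy_document.bucket_policy_bar.json,
  ]
}

# Set the bucket policy exactly once, with all statements
resource "aws_s3_bucket_policy" "combined" {
  bucket = module.s3_bucket.bucket_name
  policy = data.aws_iam_policy_document.combined.json
}

(On older provider versions, source_json serves a similar purpose but accepts only a single document.)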


Thanks to the tip from Marcin, I was able to resolve the issue by making the attachment of the policy inside the module optional:

# Attach foo vpc policy to bucket
resource "aws_s3_bucket_policy" "foo_vpc_policy" {
  count  = var.attach_vpc_policy ? 1 : 0 # Only attach the VPC policy if required
  bucket = data.aws_s3_bucket.s3_bucket.id
  policy = data.aws_iam_policy_document.foo_vpc_policy.json
}

In all cases, the policy JSON has been added as an output of the module:

# Output the policy JSON so it can later be merged with other policies that relate to the bucket
output "foo_vpc_policy_json" {
  description = "VPC Allow policy json (to be later merged with other policies that relate to the bucket outside of the module)"
  value       = data.aws_iam_policy_document.foo_vpc_policy.json
}

For the cases where the attachment of the policy needed to be deferred (to wait and attach it together with another policy), I inlined the policy via source_json:

data "aws_iam_policy_document" "bucket_policy_bar" {
  # Adding the VPC Policy JSON as a base for this Policy (docs: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document)
  source_json = module.foor_.foo_vpc_policy_json # Pull in the VPC policy statements exported by the module
  statement {
    sid       = "Bar IAM access"
    effect    = "Allow"
    resources = [module.s3_bucket_data.bucket_arn, "${module.s3_bucket_data.bucket_arn}/*"]
    actions   = ["s3:PutObject", "s3:GetObject", "s3:ListBucket"]
    principals {
      type        = "AWS"
      identifiers = [var.bar_iam]
    }
  }
}
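With the VPC statements inlined this way, the combined policy is then attached to the bucket exactly once, outside the module. A minimal sketch, assuming the module exposes the bucket name as an output named bucket_name:

# Single bucket policy attachment containing both the foo and bar statements
resource "aws_s3_bucket_policy" "attach_s3_bucket_bar_policy" {
  bucket = module.s3_bucket_data.bucket_name
  policy = data.aws_iam_policy_document.bucket_policy_bar.json
}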