4 votes

We are currently using S3 as the backend for storing our Terraform state file. While executing terraform plan, we receive the error below:

Error: Forbidden: Forbidden
        status code: 403, request id: 18CB0EA827E6FE0F, host id: 8p0TMjzvooEBPNakoRsO3RtbARk01KY1KK3z93Lwyvh1Nx6sw4PpRyfoqNKyG2ryMNAHsdCJ39E=

We enabled debug mode, and below are the error messages we noticed:

2020-05-31T20:02:20.842+0400 [DEBUG] plugin.terraform-provider-aws_v2.64.0_x4: Accept-Encoding: gzip
2020-05-31T20:02:20.842+0400 [DEBUG] plugin.terraform-provider-aws_v2.64.0_x4: 
2020-05-31T20:02:20.842+0400 [DEBUG] plugin.terraform-provider-aws_v2.64.0_x4: 
2020-05-31T20:02:20.842+0400 [DEBUG] plugin.terraform-provider-aws_v2.64.0_x4: -----------------------------------------------------
2020/05/31 20:02:20 [ERROR] <root>: eval: *terraform.EvalRefresh, err: Forbidden: Forbidden
        status code: 403, request id: 2AB56118732D7165, host id: 5sM6IwjkufaDg1bt5Swh5vcQD2hd3fSf9UqAtlL4hVzVaGPRQgvs1V8S3e/h3ta0gkRcGI7GvBM=
2020/05/31 20:02:20 [ERROR] <root>: eval: *terraform.EvalSequence, err: Forbidden: Forbidden
        status code: 403, request id: 2AB56118732D7165, host id: 5sM6IwjkufaDg1bt5Swh5vcQD2hd3fSf9UqAtlL4hVzVaGPRQgvs1V8S3e/h3ta0gkRcGI7GvBM=
2020/05/31 20:02:20 [TRACE] [walkRefresh] Exiting eval tree: aws_s3_bucket_object.xxxxxx
2020/05/31 20:02:20 [TRACE] vertex "aws_s3_bucket_object.xxxxxx": visit complete
2020/05/31 20:02:20 [TRACE] vertex "aws_s3_bucket_object.xxxxxx": dynamic subgraph encountered errors
2020/05/31 20:02:20 [TRACE] vertex "aws_s3_bucket_object.xxxxxx": visit complete

We have tried reverting the code and the tfstate file to a known working version, and also deleted the tfstate file locally, but we still get the same error.
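
To isolate whether the 403 comes from the state read itself, the backend's access can be reproduced outside Terraform. A sketch (the bucket and key names below are placeholders, not the real values):

```shell
# Sketch: reproduce the S3 backend's state read outside Terraform.
# Bucket and key names are placeholders.
state_read_check() {
  bucket="$1"; key="$2"
  if aws s3api head-object --bucket "$bucket" --key "$key" >/dev/null 2>&1; then
    echo "read OK"
  else
    echo "read FORBIDDEN"
  fi
}
# usage: state_read_check my-tf-state-bucket env/terraform.tfstate
```

If this fails with the same credentials Terraform uses, the problem is in the bucket/object permissions rather than in Terraform itself.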

The S3 bucket policy is as below:

{
    "Sid": "DelegateS3Access",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::xxxxxx:role/Administrator"
    },
    "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:GetObjectTagging"
    ],
    "Resource": [
        "arn:aws:s3:::xxxxxx/*",
        "arn:aws:s3:::xxxxxx"
    ]
}

Terraform assumes the same role for execution, and it still fails. I have also emptied the bucket policy and tried again, without success. I understand it is something to do with the bucket policy itself, but I am not sure how to fix it.

Any pointers to fix this issue are highly appreciated.

2
Having your S3 bucket policy to review would help in understanding this. Make sure to mask your account IDs, KMS key IDs, and other personally identifiable information such as person or company names with fake placeholders before you post it. – Alain O'Dea
Also, please post the combined IAM policy of the IAM User or IAM Role STS session whose credentials you are supplying to Terraform when you run it. – Alain O'Dea
Updated the description with the bucket policy. Please check. – Vamshi Siddarth

2 Answers

4 votes

One thing to check is who you are (from an AWS API perspective) before running Terraform:

aws sts get-caller-identity

If the output is like this, then you are authenticated as an IAM User, who will not have access to the bucket, since the policy grants access to an IAM Role and not an IAM User:

{
    "UserId": "AIDASAMPLEUSERID",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/DevAdmin"
}

In that case, you'll need to configure the AWS CLI (in ~/.aws/config) to assume arn:aws:iam::xxxxxx:role/Administrator:

[profile administrator]
role_arn = arn:aws:iam::xxxxxx:role/Administrator
source_profile = user1

Read more on that process here:

https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-role.html

If get-caller-identity returns something like this instead, then you are assuming the IAM Role, and the issue is more likely with the actions in the bucket policy:

{
    "UserId": "AIDASAMPLEUSERID",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:assumed-role/Administrator/role-session-name"
}
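
The two shapes of output above can be told apart mechanically from the Arn field. A small sketch using standard tools (this is not an official AWS helper, just text parsing of the JSON):

```shell
# Classify an STS caller identity as "user" or "assumed-role" from its Arn.
# Reads get-caller-identity JSON on stdin, e.g.:
#   aws sts get-caller-identity | caller_kind
caller_kind() {
  arn=$(sed -n 's/.*"Arn": *"\([^"]*\)".*/\1/p')
  case "$arn" in
    *:assumed-role/*) echo "assumed-role" ;;
    *:user/*)         echo "user" ;;
    *)                echo "unknown" ;;
  esac
}
```

"user" here means the credentials will not match the bucket policy's role principal; "assumed-role" means the identity is right and the policy's actions are the next suspect.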

According to the Backend type: S3 documentation, you also need s3:PutObject:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::mybucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::mybucket/path/to/my/key"
    }
  ]
}

While I can't see why s3:PutObject would be needed for plan, its absence is conceivably what is causing this Forbidden error.
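
A quick way to compare a bucket policy against the actions the backend needs. This sketch uses the question's action list inline; in practice the document could be fetched with aws s3api get-bucket-policy:

```shell
# Check a bucket policy document for the actions the S3 backend needs.
# In practice fetch it with:
#   aws s3api get-bucket-policy --bucket mybucket --query Policy --output text > policy.json
cat > policy.json <<'EOF'
{
  "Sid": "DelegateS3Access",
  "Effect": "Allow",
  "Action": ["s3:ListBucket", "s3:GetObject", "s3:GetObjectTagging"]
}
EOF
for action in s3:ListBucket s3:GetObject s3:PutObject; do
  if grep -q "\"$action\"" policy.json; then
    echo "$action: present"
  else
    echo "$action: MISSING"
  fi
done
```

Against the question's policy this reports s3:PutObject as missing, which matches the gap noted above. (A plain grep is a rough check; it ignores Deny statements and wildcards.)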

You can also look for denied S3 actions in CloudTrail, if you have it enabled.

2 votes

The issue is fixed now. Prior to this, we had performed an S3 copy that moved all the S3 objects from account A to account B. The problem is that the copy command carries the source owner's permissions along with the objects, which left the current role unable to access the newly copied objects and resulted in the Forbidden 403 error.
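
A hedged way to confirm this kind of cross-account ownership problem (bucket and key names are placeholders): after a cross-account cp, get-object-acl would still report the source account as the object owner rather than the bucket-owning account.

```shell
# Sketch: report who owns an object. After a cross-account cp, this can
# still be the source account's owner, which is what breaks access here.
object_owner() {
  aws s3api get-object-acl --bucket "$1" --key "$2" \
    --query Owner.DisplayName --output text
}
# usage: object_owner my-tf-state-bucket path/to/object
```

Besides re-uploading via sync, copying again with --acl bucket-owner-full-control is another common fix for this class of problem.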

We cleared all the objects in this bucket and ran the aws sync command instead of cp, which fixed the issue for us. Thank you, Alain, for the detailed explanation; it helped point us to the right issue.

Steps followed:

  1. Back up all the S3 objects.
  2. Empty the bucket.
  3. Run terraform plan.
  4. Once the changes are made to the bucket, run the aws sync command.
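
The steps above can be sketched as shell commands. The bucket name and backup path are placeholders, and terraform plan runs between steps 2 and 4:

```shell
# Sketch of the recovery steps; bucket name and local path are placeholders.
backup_bucket() { aws s3 sync "s3://$1" "$2"; }          # 1. back up all objects
empty_bucket() { aws s3 rm "s3://$1" --recursive; }      # 2. empty the bucket
restore_bucket() { aws s3 sync "$2" "s3://$1"; }         # 4. restore via sync (not cp)
# backup_bucket my-tf-state-bucket ./backup
# (3. run terraform plan here)
# restore_bucket my-tf-state-bucket ./backup
```

Using sync for the restore is the point of the fix: the re-uploaded objects are written by (and owned by) the current account, unlike the cross-account cp that caused the 403.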