27
votes

I have a Terraform configuration targeting deployment on AWS. It applies beautifully when using an IAM user that has permission to do anything (i.e. {actions: ["*"], resources: ["*"]}).

In pursuit of automating the application of this Terraform configuration, I want to determine the minimum set of permissions necessary to apply the configuration initially and effect subsequent changes. I specifically want to avoid giving overbroad permissions in policy, e.g. {actions: ["s3:*"], resources: ["*"]}.

So far, I'm simply running terraform apply until an error occurs. I look at the output or at the Terraform log output to see which API call failed, then add it to the deployment user's policy. EC2 and S3 are particularly frustrating because the names of the actions don't necessarily align with the API method names. I'm several hours into this with no easy way to tell how far along I am.
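Roughly, the loop looks like this (the error strings vary by service, so the grep pattern is only a heuristic):

# Run an apply, keep the output, and pull out authorization failures; AWS
# services report them inconsistently ("AccessDenied",
# "UnauthorizedOperation", "not authorized to perform", ...).
terraform apply -auto-approve 2>&1 | tee apply.log
grep -iE 'AccessDenied|UnauthorizedOperation|not authorized' apply.log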

Is there a more efficient way to do this?

It'd be really nice if Terraform advised me which permission/action I need, but that's a product enhancement best left to HashiCorp.

5
Note that applying these from a clean slate will not give you the total permissions needed to manage these resources! Consider updating or deleting these resources in the future... you may need additional permissions to do these actions. – Eric M. Johnson
That's a very important distinction, @EricJohnson. Thanks for pointing that out. I'd love recommendations on how to account for that, as well. – Colin Dean

5 Answers

4
votes

While I still believe that such a super-strict policy will be a continuous pain and will likely hurt productivity (though that may depend on the project), there is now a tool for this.

iamlive uses the Client Side Monitoring feature of the AWS SDK to create a minimal policy based on the executed API calls. As Terraform uses the AWS SDK, this works here as well.

In contrast to my previous (and accepted) answer, iamlive should even get the actual IAM actions right, which do not necessarily match the API calls 1:1 (and which are what CloudTrail would log).
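For illustration, a typical session might look something like the sketch below. The environment variables are the AWS SDK's standard client-side-monitoring settings and the port is iamlive's default; check the iamlive README for the exact flags of your version.

# Terminal 1: start iamlive; in its default CSM mode it listens for the SDK's
# client-side-monitoring events and prints an accumulating minimal policy.
iamlive

# Terminal 2: enable client-side monitoring for the AWS SDK that Terraform
# embeds, then run the commands you want to profile.
export AWS_CSM_ENABLED=true
export AWS_CSM_HOST=127.0.0.1
export AWS_CSM_PORT=31000
terraform plan
terraform apply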

36
votes

Here is another approach, similar to what was said above, but without getting into CloudTrail:

  1. Give full permissions to your IAM user.
  2. Run TF_LOG=trace terraform apply --auto-approve &> log.log
  3. Run cat log.log | grep "DEBUG: Request"

You will get a list of all the AWS API calls that were made.
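To turn those hits into a de-duplicated list of service/operation pairs, something like the following works. It assumes the aws-sdk-go log format ("DEBUG: Request ec2/DescribeVpcs Details: ..."); newer provider versions may log differently, so adjust the pattern if the output looks off.

# Extract "<service>/<operation>" from each matching line and de-duplicate.
grep -oE 'DEBUG: Request [a-zA-Z0-9.-]+/[A-Za-z0-9]+' log.log \
  | awk '{print $3}' \
  | sort -u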

6
votes

EDIT Feb 2022: there is a better way using iamlive and client-side monitoring. Please see my other answer.

As I guess there's no perfect solution, treat this answer a bit as the result of my brainstorming. At least for the initial permission setup, I could imagine the following:

Allow everything first and then process the CloudTrail logs to see which API calls were made during a terraform apply / destroy cycle.

Afterwards, you update the IAM policy to include exactly these calls.
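As a rough sketch, the relevant calls can be pulled with the AWS CLI. The user name is a placeholder, and CloudTrail delivers events with a delay of several minutes, so run this a while after the apply/destroy cycle.

# List the service/action pairs recorded for the deployment user.
# "terraform-deployer" is a placeholder; adjust the user name and result
# count to match your run.
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=Username,AttributeValue=terraform-deployer \
  --max-results 50 \
  --query 'Events[].[EventSource,EventName]' \
  --output text | sort -u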

3
votes

An efficient way I followed:

The way I deal with this is to allow all permissions (*) for the services involved first, then deny specific actions if they are not required.

For example:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowSpecifics",
            "Action": [
                "ec2:*",
                "rds:*",
                "s3:*",
                "sns:*",
                "sqs:*",
                "iam:*",
                "elasticloadbalancing:*",
                "autoscaling:*",
                "cloudwatch:*",
                "cloudfront:*",
                "route53:*",
                "ecr:*",
                "logs:*",
                "ecs:*",
                "application-autoscaling:*",
                "logs:*",
                "events:*",
                "elasticache:*",
                "es:*",
                "kms:*",
                "dynamodb:*"
            ],
            "Effect": "Allow",
            "Resource": "*"
        },
        {
            "Sid": "DenySpecifics",
            "Action": [
                "iam:*User*",
                "iam:*Login*",
                "iam:*Group*",
                "iam:*Provider*",
                "aws-portal:*",
                "budgets:*",
                "config:*",
                "directconnect:*",
                "aws-marketplace:*",
                "aws-marketplace-management:*",
                "ec2:*ReservedInstances*"
            ],
            "Effect": "Deny",
            "Resource": "*"
        }
    ]
}

You can easily adjust the list in the Deny section if Terraform doesn't need certain services or your company doesn't use some AWS services.
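If you want to sanity-check what such a policy actually allows or denies before attaching it, the IAM policy simulator can help. A hedged sketch with the AWS CLI follows; the file name and action names are just examples.

# Simulate a few candidate actions against the policy document saved locally
# as policy.json and show whether each would be allowed or denied.
aws iam simulate-custom-policy \
  --policy-input-list file://policy.json \
  --action-names ec2:RunInstances iam:CreateUser budgets:ViewBudget \
  --query 'EvaluationResults[].[EvalActionName,EvalDecision]' \
  --output table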


-1
votes

The method you are attempting is a bit unusual in cloud technology. Instead of giving fine-grained control to the Terraform user running the API calls, you could run Terraform on an EC2 instance with an IAM instance profile that allows the instance itself to call the APIs with the correct set of AWS service permissions.

Each call is made to a specific API endpoint. Some Terraform operations performed during an apply would be different if you pushed an update. You also need to know what happens during a modify operation, where a resource is updated in place.

Instead of restricting the credentials to specific actions, give them more leeway: if Terraform needs to create and destroy EC2 resources, grant it full EC2 permissions, perhaps with condition keys to restrict them to a particular account or VPC.
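A rough sketch of that setup with the AWS CLI is below; the role and profile names are placeholders, and the attached managed policy (AmazonEC2FullAccess) is just one example of the "broad but service-scoped" idea.

# Create a role that EC2 instances can assume, attach a broad service-level
# policy, and wrap it in an instance profile for the Terraform host to use.
aws iam create-role \
  --role-name terraform-runner \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }]
  }'
aws iam attach-role-policy \
  --role-name terraform-runner \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
aws iam create-instance-profile --instance-profile-name terraform-runner
aws iam add-role-to-instance-profile \
  --instance-profile-name terraform-runner \
  --role-name terraform-runner
# Launch the Terraform host with --iam-instance-profile Name=terraform-runner;
# the AWS provider then picks up the instance credentials automatically.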