If one has two AWS accounts, one for development and one for live (for example), I am aware that one can use Terraform workspaces to manage the state of each environment.

However, if I switch workspaces from "dev" to "live", is there a way to tell Terraform it should now be applying the state to the live account rather than the dev one?

One way I thought of, which is error prone, would be to swap my secret.auto.tfvars file each time I switch workspaces, since I presume that when running with a different access key (the one belonging to the "live" account) the AWS provider will then apply to that account. However, it would be very easy to switch workspaces while the wrong credentials are still present, which would run the changes against the wrong environment.

I'm looking for a way to almost link a workspace with an account id in AWS.

I did find https://github.com/hashicorp/terraform/issues/13700, but it refers to the deprecated env command; this comment in particular looked somewhat promising.

Update

I have found some information on GitHub, where I left this comment in reply to an earlier comment that recommended using modules instead of workspaces and indicated that workspaces aren't well suited to this task. If anyone can show how modules could be used to maintain multiple versions of the "same" infrastructure concurrently, I'd be keen to see how this improves on the workspace approach.


2 Answers


Here's how you could use Terraform modules to structure your live vs dev environments so that they point to different AWS accounts while both using the same Terraform code.

This is one (of many) ways that you could structure your dirs; you could even put the modules into their own Git repo, but I'm going to try not to confuse things too much. In this example, you have a simple app that has 1 EC2 instance and 1 RDS database. You write whatever Terraform code you need in the modules/*/ subdirs, making sure to parameterize whatever attributes are different across environments.

Then in your dev/ and live/ dirs, main.tf should be the same, while provider.tf and terraform.tfvars reflect environment-specific info. main.tf would call the modules and pass in the env-specific params.

modules/
|-- ec2_instance/
|-- rds_db/
dev/
|-- main.tf             # --> uses the 2 modules
|-- provider.tf         # --> has info about dev AWS account
|-- terraform.tfvars    # --> has dev-specific values
live/
|-- main.tf             # --> uses the 2 modules
|-- provider.tf         # --> has info about live/prod AWS account
|-- terraform.tfvars    # --> has prod-specific values
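
As a rough sketch, the files in dev/ might look like this (the region, profile name, module variables, and instance sizes are all made-up values for illustration):

```hcl
# dev/provider.tf -- points Terraform at the dev AWS account
provider "aws" {
  region  = "us-east-1"   # assumed region
  profile = "dev"         # named profile holding the dev account's credentials
}

# dev/main.tf -- identical in live/; only the values passed in differ
module "ec2_instance" {
  source        = "../modules/ec2_instance"
  instance_type = var.instance_type   # e.g. t3.micro in dev, m5.large in live
}

module "rds_db" {
  source         = "../modules/rds_db"
  instance_class = var.db_instance_class
}
```

The live/ copies differ only in the provider settings and the values supplied in terraform.tfvars.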

When you need to plan/apply either env, you drop into the appropriate dir and run your TF commands there.


As for why this is preferred over using Terraform Workspaces, the TF docs explain it well:

In particular, organizations commonly want to create a strong separation between multiple deployments of the same infrastructure serving different development stages (e.g. staging vs. production) or different internal teams. In this case, the backend used for each deployment often belongs to that deployment, with different credentials and access controls. Named workspaces are not a suitable isolation mechanism for this scenario.

Instead, use one or more re-usable modules to represent the common elements, and then represent each instance as a separate configuration that instantiates those common elements in the context of a different backend. In that case, the root module of each configuration will consist only of a backend configuration and a small number of module blocks whose arguments describe any small differences between the deployments.
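
Concretely, "a separate configuration ... in the context of a different backend" can be as small as this (the bucket name and key are assumptions for this sketch):

```hcl
# live/backend.tf -- each environment keeps its own state, with its own
# credentials and access controls, because the backend lives in that
# environment's own AWS account
terraform {
  backend "s3" {
    bucket = "example-live-tfstate"   # assumed bucket, in the live account
    key    = "app/terraform.tfstate"
    region = "us-east-1"
  }
}
```

The dev configuration would carry the same block pointing at a bucket in the dev account.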

BTW, Terraform merely renamed the env subcommand to workspace when they decided that 'env' was a bit too confusing.

Hope this helps!


Terraform workspaces hold the state information. They connect to AWS accounts based on how AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are set in the environment. In order to link any given workspace to an AWS account, workspaces would have to store user credentials in some way, which they understandably don't. Therefore, I would not expect workspaces ever to directly support that.

But, to almost link a workspace to an account, you just need to switch AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY automatically each time the workspace is switched. You can do this by writing a wrapper around terraform which:

  1. Passes all commands on to the real terraform unless it finds workspace select on the command line.

  2. Upon finding workspace select on the command line, parses out the workspace name.

  3. Exports the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY for the AWS account you want to link to that workspace.

  4. Finishes by passing the workspace command on to the real terraform.

This would load in the correct credentials each time

terraform workspace select <WORKSPACE>

was used
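
A minimal sketch of such a wrapper, written as a shell function; the one-credentials-file-per-workspace layout under ~/.aws/ is a naming convention invented for this sketch:

```shell
# Hypothetical wrapper: define "tf" in your shell profile and use it in
# place of terraform. Assumes one credentials file per workspace at
# ~/.aws/creds.<workspace>.sh, each exporting AWS_ACCESS_KEY_ID and
# AWS_SECRET_ACCESS_KEY for the AWS account linked to that workspace.
tf() {
  if [ "$1" = "workspace" ] && [ "$2" = "select" ]; then
    creds="$HOME/.aws/creds.$3.sh"
    if [ -f "$creds" ]; then
      . "$creds"   # load the keys for the account linked to this workspace
    else
      echo "no credentials file for workspace '$3'" >&2
      return 1
    fi
  fi
  terraform "$@"   # pass every command on to the real terraform
}
```

Because tf is a function rather than a subprocess, the exported variables persist in your shell, so subsequent plan/apply runs hit the account matching the selected workspace.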