54
votes

I've been using Terraform to build my AWS stack and have been enjoying it. If it were to be used in a commercial setting, the configuration would need to be reused for different environments (e.g. QA, STAGING, PROD).

How would I be able to achieve this? Would I need to create a wrapper script that makes calls to terraform's cli while passing in different state files per environment like below? I'm wondering if there's a more native solution provided by Terraform.

terraform apply -state=qa.tfstate
8
github.com/unfor19/terraform-multienv - Since there's already an accepted answer and this question is quite old, I'm adding it here as a comment. I've created a template for maintaining a multi-environment infrastructure with Terraform. The template includes a CI/CD process that applies the infrastructure in an AWS account. – Meir Gabay

8 Answers

28
votes

I suggest you take a look at the hashicorp best-practices repo, which has quite a nice setup for dealing with different environments (similar to what James Woolfenden suggested).

We're using a similar setup, and it works quite nicely. However, this best-practices repo assumes you're using Atlas, which we're not. We've created quite an elaborate Rakefile, which basically (going by the best-practices repo again) gets all the subfolders of /terraform/providers/aws, and exposes them as different builds using namespaces. So our rake -T output would list the following tasks:

us_east_1_prod:init
us_east_1_prod:plan
us_east_1_prod:apply

us_east_1_staging:init
us_east_1_staging:plan
us_east_1_staging:apply

This separation prevents changes which might be exclusive to dev from accidentally affecting (or worse, destroying) something in prod, as each environment has its own state file. It also allows testing a change in dev/staging before actually applying it to prod.
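Each of those namespaced builds points at its own remote state. A minimal sketch of what one such subfolder might pin, written in today's backend-block syntax (the bucket name and key are hypothetical):

```hcl
# terraform/providers/aws/us_east_1_prod/main.tf (sketch; names are placeholders)
terraform {
  backend "s3" {
    bucket = "acme-terraform-state"             # one bucket, a distinct key per env
    key    = "us-east-1/prod/terraform.tfstate"
    region = "us-east-1"
  }
}

provider "aws" {
  region = "us-east-1"
}
```

With the state key unique per folder, a rake task run against staging physically cannot rewrite prod's state.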

Also, I recently stumbled upon this little write up, which basically shows what might happen if you keep everything together: https://charity.wtf/2016/03/30/terraform-vpc-and-why-you-want-a-tfstate-file-per-env/

17
votes

Paul's solution with modules is the right idea. However, I would strongly recommend against defining all of your environments (e.g. QA, staging, production) in the same Terraform file. If you do, then whenever you're making a change to staging, you risk accidentally breaking production too, which partially defeats the point of keeping those environments isolated in the first place! See Terraform, VPC, and why you want a tfstate file per env for a colorful discussion of what can go wrong.

I always recommend storing the Terraform code for each environment in a separate folder. In fact, you may even want to store the Terraform code for each "component" (e.g. a database, a VPC, a single app) in separate folders. Again, the reason is isolation: when making changes to a single app (which you might do 10 times per day), you don't want to put your entire VPC at risk (which you probably never change).

Therefore, my typical file layout looks something like this:

stage
  └ vpc
     └ main.tf
     └ vars.tf
     └ outputs.tf
  └ app
  └ db
prod
  └ vpc
  └ app
  └ db
global
  └ s3
  └ iam

All the Terraform code for the staging environment goes into the stage folder, all the code for the prod environment goes into the prod folder, and all the code that lives outside of an environment (e.g. IAM users, S3 buckets) goes into the global folder.
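One way to wire this up is to give every component folder its own state file whose key mirrors the folder path. A sketch of what stage/vpc/main.tf could contain under that assumption (the bucket name and the cidr_block variable are hypothetical):

```hcl
# stage/vpc/main.tf (sketch; bucket name is a placeholder)
terraform {
  backend "s3" {
    bucket = "my-company-terraform-state"
    key    = "stage/vpc/terraform.tfstate"  # key mirrors the folder layout
    region = "us-east-1"
  }
}

resource "aws_vpc" "main" {
  cidr_block = var.cidr_block  # declared in vars.tf
}
```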

For more info, check out How to manage Terraform state. For a deeper look at Terraform best practices, check out the book Terraform: Up & Running.

14
votes

Please note that as of version 0.10.0, Terraform supports the concept of Workspaces (called "environments" in 0.9.x).

A workspace is a named container for Terraform state. With multiple workspaces, a single directory of Terraform configuration can be used to manage multiple distinct sets of infrastructure resources.

See more info here: https://www.terraform.io/docs/state/workspaces.html
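Within the configuration, the selected workspace is exposed as terraform.workspace, so a single set of files can vary names and sizes per environment. A brief sketch (the ami_id variable and instance sizes are hypothetical):

```hcl
# Sketch: one configuration, parametrized by the current workspace
locals {
  instance_type = terraform.workspace == "prod" ? "m5.large" : "t3.micro"
}

resource "aws_instance" "app" {
  ami           = var.ami_id            # hypothetical variable
  instance_type = local.instance_type

  tags = {
    Name = "app-${terraform.workspace}" # e.g. app-prod, app-staging
  }
}
```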

6
votes

As you scale up your Terraform usage, you will need to share state (between devs, build processes, and different projects) and support multiple environments and regions. For this you need to use remote state. Before you execute your Terraform, you need to set up your state. (I'm using PowerShell.)

$environment="devtestexample"
$region     ="eu-west-1"
$remote_state_bucket = "${environment}-terraform-state"
$bucket_key = "yoursharedobject.$region.tfstate"

aws s3 ls "s3://$remote_state_bucket"|out-null
if ($lastexitcode)
{
   aws s3 mb "s3://$remote_state_bucket"
}

terraform remote config -backend S3 -backend-config="bucket=$remote_state_bucket"  -backend-config="key=$bucket_key" -backend-config="region=$region"
#(see here: https://www.terraform.io/docs/commands/remote-config.html - note: terraform remote config was replaced by backend configuration plus terraform init in Terraform 0.9+)

terraform apply -var "environment=$environment" -var "region=$region"

Your state is now stored in S3, by region, by environment, and you can then access this state in other tf projects.
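Other projects can then read that shared state with the terraform_remote_state data source. A sketch against the bucket/key scheme above (the vpc_id output is hypothetical and assumes the producing project defines it):

```hcl
# Sketch: consume outputs of the shared state from another project
data "terraform_remote_state" "shared" {
  backend = "s3"
  config = {
    bucket = "devtestexample-terraform-state"
    key    = "yoursharedobject.eu-west-1.tfstate"
    region = "eu-west-1"
  }
}

# e.g. data.terraform_remote_state.shared.outputs.vpc_id (Terraform 0.12+ syntax)
```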

3
votes

No need to make a wrapper script. What we do is split our env into a module and then have a top-level Terraform file where we just import that module for each environment. As long as your module is set up to take enough variables (generally env_name and a few others), you're good. As an example:

# project/main.tf
module "dev" {
    source = "./env"

    env = "dev"
    aws_ssh_keyname = "dev_ssh"
}

module "stage" {
    source = "./env"

    env = "stage"
    aws_ssh_keyname = "stage_ssh"
}

# Then in project/env/main.tf
# All the resources would be defined in here
# along with variables for env and aws_ssh_keyname, etc.
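The env module itself would declare matching input variables and derive resource names from them. A sketch of project/env/main.tf (the instance resource and the ami_id variable are just illustrations):

```hcl
# project/env/main.tf (sketch)
variable "env" {
  type = string
}

variable "aws_ssh_keyname" {
  type = string
}

variable "ami_id" {
  type = string
}

# Every resource in the module derives its naming from var.env
resource "aws_instance" "app" {
  ami           = var.ami_id
  instance_type = "t3.micro"
  key_name      = var.aws_ssh_keyname

  tags = {
    Name = "app-${var.env}"
  }
}
```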

Edit 2020/03/01

This answer is pretty old at this point, but it's worth updating. The critique that it's bad for dev and stage to share the same state file is a matter of perspective. For the exact code provided above it's completely valid, because dev and stage share the same code as well. Thus "breaking dev will wreck your stage" is correct. The critical thing I didn't note when writing this answer was that source = "./env" could also be written as source = "git::https://example.com/network.git//modules/vpc?ref=v1.2.0"

Doing that makes your entire repo become something of a submodule to the TF scripts allowing you to split out one branch as your QA branch and then tagged references as your Production envs. That obviates the problem of wrecking your staging env with a change to dev.

Next, state-file sharing. I say that's a matter of perspective because with one single run it's possible to update all your environments. In a small company, that time saving when promoting changes can be useful; some trickery with --target is usually enough to speed up the process if you're careful, if that's even really needed. We found it less error-prone to manage everything from one place and one Terraform run, rather than having multiple different configurations possibly being applied slightly differently across the environments. Having them all in one state file forced us to be more disciplined about what really needed to be a variable vs. what was just overkill for our purposes. It also very strongly prevented our environments from drifting too far apart from each other. When your terraform plan output shows 2k lines of differences, mainly because dev and stage look nothing like prod, the frustration factor alone encouraged our team to bring that back to sanity.

A very strong counter argument to that is if you're in a large company where various compliance rules prevent you from touching dev / stage / prod at the same time. In that scenario it's better to split up your state files, just make sure that how you're running terraform apply is scripted. Otherwise you run the very real risk of those state files drifting apart when someone says "Oh I just need to --target just this one thing in staging. We'll fix it next sprint, promise." I've seen that spiral quickly multiple times now, making any kind of comparison between the environments questionable at best.

2
votes

From Terraform version 0.10+, there is a way to maintain separate state files per environment using the workspace command:

   $ terraform workspace list                      # lists all existing workspaces
   $ terraform workspace new <workspace_name>      # creates a workspace
   $ terraform workspace select <workspace_name>   # selects a workspace
   $ terraform workspace delete <workspace_name>   # deletes a workspace

The first thing you need to do is create a workspace for each of your environments:

   $ terraform workspace new dev

Created and switched to workspace "dev"!

You're now on a new, empty workspace. Workspaces isolate their state, so if you run "terraform plan" Terraform will not see any existing state for this configuration.

  $ terraform workspace new test

Created and switched to workspace "test"!

You're now on a new, empty workspace. Workspaces isolate their state, so if you run "terraform plan" Terraform will not see any existing state for this configuration.

 $ terraform workspace new stage

Created and switched to workspace "stage"!

You're now on a new, empty workspace. Workspaces isolate their state, so if you run "terraform plan" Terraform will not see any existing state for this configuration.

A terraform.tfstate.d directory will be created in the backend.

Under it you can see 3 directories - dev, test, stage - and each will maintain its state file under its workspace.

All you need to do now is move the environment variable files into another folder and keep only one variable file in place for each execution of terraform plan and terraform apply:

   main.tf
   dev_variable.tfvar
   output.tf

Remember to switch to the right workspace to use the correct environment's state file:

   $ terraform workspace select test
   main.tf
   test_variable.tfvar
   output.tf

Ref : https://dzone.com/articles/manage-multiple-environments-with-terraform-worksp

1
votes

There are plenty of good answers in this thread. Let me contribute as well with an idea that worked for me and some other teams.

The idea is to have a single "umbrella" project that contains the whole infrastructure code.

Each environment's terraform file includes just a single module - the "main".

Then "main" includes resources and other modules:

- terraform_project
- env
  - dev01        <-- Terraform home, run from here 
    - .terraform    <-- git ignored of course
    - dev01.tf  <-- backend, env config, includes always _only_ the main module
  - dev02
    - .terraform
    - dev02.tf 
  - stg01
    - .terraform
    - stg01.tf
  - prd01
    - .terraform
    - prd01.tf 
- main        <-- main umbrella module
   - main.tf
   - variables.tf         
- modules         <-- submodules of the main module
  - module_a
  - module_b
  - module_c

And a sample environment home file (e.g. dev01.tf) will look like this:

provider "azurerm" {
  version = "~>1.42.0"
}

terraform {
  backend "azurerm" {
    resource_group_name  = "tradelens-host-rg"
    storage_account_name = "stterraformstate001"
    container_name       = "terraformstate"
    key                  = "dev.terraform.terraformstate"
  }
}

module "main" {
  source               = "../../main"
  subscription_id      = "000000000-0000-0000-0000-00000000000"
  project_name         = "tlens"
  environment_name     = "dev"
  resource_group_name  = "tradelens-main-dev"
  tenant_id            = "790fd69f-41a3-4b51-8a42-685767c7d8zz"
  location             = "West Europe"
  developers_object_id = "58968a05-dc52-4b69-a7df-ff99f01e12zz"
  terraform_sp_app_id  = "8afb2166-9168-4919-ba27-6f3f9dfad3ff"

  kubernetes_version      = "1.14.8"
  kuberenetes_vm_size     = "Standard_B2ms"
  kuberenetes_nodes_count = 4

  enable_ddos_protection = false
  enable_waf             = false
}

Thanks to that you:

  • Can have separate backends for Terraform remote state files per environment
  • Can use separate system accounts for different environments
  • Can use different versions of providers and Terraform itself per environment (and upgrade them one by one)
  • Ensure that all required properties are provided per environment (terraform validate won't pass if an environmental property is missing)
  • Ensure that all resources/modules are always added to all environments - it is not possible to "forget" a whole module, because there is just one
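The "required properties" guarantee works because the main module declares its inputs without defaults, so terraform validate fails when an environment file omits one. A sketch of a few entries in main/variables.tf, mirroring the module call above (spelling of kuberenetes kept to match it):

```hcl
# main/variables.tf (sketch): no defaults, so every environment must supply a value
variable "environment_name" {
  type = string
}

variable "kubernetes_version" {
  type = string
}

variable "kuberenetes_nodes_count" {
  type = number
}
```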

Check the source blog post.

1
votes

There is absolutely no need to have separate codebases for dev and prod environments. Best practice (I mean DRY) dictates that you are actually better off having one code base and simply parametrizing it, as you would normally do when developing software - you don't have separate folders for the development version of an application and for the production version. You only need to ensure the right deployment scheme. The same goes for Terraform. Consider this "Hello world" idea:

terraform-project
├── etc
│   ├── backend
│   │   ├── dev.conf
│   │   └── prod.conf
│   └── tfvars
│       ├── dev.tfvars
│       └── prod.tfvars
└── src
    └── main.tf

contents of etc/backend/dev.conf

storage_account_name = "tfremotestates"
container_name       = "tf-state.dev"
key                  = "terraform.tfstate"
access_key           = "****"

contents of etc/backend/prod.conf

storage_account_name = "tfremotestates"
container_name       = "tf-state.prod"
key                  = "terraform.tfstate"
access_key           = "****"

contents of etc/tfvars/dev.tfvars

environment = "dev"

contents of etc/tfvars/prod.tfvars

environment = "prod"

contents of src/main.tf

variable "environment" {
  type = string
}

terraform {
  backend "azurerm" {
  }
}

provider "azurerm" {
  version = "~> 2.56.0"
  features {}
}

resource "azurerm_resource_group" "rg" {
  name     = "rg-${var.environment}"
  location = "eastus"
}

Now you only have to pass the appropriate value to the CLI invocation, e.g.:

export ENVIRONMENT=dev
terraform init -backend-config=etc/backend/${ENVIRONMENT}.conf
terraform apply -var-file=etc/tfvars/${ENVIRONMENT}.tfvars

This way:

  • we have separate state files for each environment (so they can even be deployed in different subscriptions/accounts)
  • we have the same code base, so we are sure the differences between dev and prod are small and we can rely on dev for testing purposes before going live
  • we follow DRY directive
  • we follow KISS directive
  • no need to use the obscure "workspaces" interface!

Of course, in order for this to be fully secure you should incorporate some kind of git flow and code review, perhaps some static or integration testing, and an automatic deployment process. But I consider this the best approach to having multiple Terraform environments without duplicating code, and it has worked for us very nicely for a couple of years now.