The answer to this question typically depends on whether you are talking about multiple deployment stages for the Terraform configuration itself, or multiple deployment stages for whatever applications/services you'll be running on the Terraform-managed infrastructure.
One way to think about this distinction is to consider what the multiple stages are meant to achieve. If your goal is to have somewhere to try running terraform apply before you do it in production, then you're talking about multiple deployment stages of the Terraform configuration itself. If your goal is a long-lived staging environment to deploy your application/service into, then that staging environment is also "production" from the perspective of your deployment pipeline, and so should typically be treated as such.
To test your Terraform configurations prior to applying them to "real infrastructure", you can use Terraform CLI workspaces to create temporary additional states associated with your configuration, so you can try applying changes without affecting the main infrastructure represented in the "default" workspace:
- Run terraform workspace new temp-test to create a temporary workspace.
- Use your version control system to select whichever commit was most recently applied in the default workspace. This will typically be on the main branch of your version control repository, although depending on how you use VCS you might need to select an earlier commit to exclude any changes that haven't been applied to the real system yet.
- Run terraform apply to create infrastructure equivalent to the default workspace, to be the basis for your test.
- Use your version control system to switch back to the configuration you're intending to test. This will typically be a feature branch in your repository, possibly attached to a pull request. In a real version of this workflow, it can be helpful to name your temporary workspace after your branch so your colleagues can easily see which branches and workspaces go together.
- Run terraform apply again to plan and apply the changes represented by the new configuration.
- If the apply succeeded, inspect the infrastructure it created to make sure it's functioning in the way you intended.
- When you're done:
  - Run terraform destroy to destroy the temp-test infrastructure.
  - Run terraform workspace select default to switch back to the default workspace.
  - Run terraform workspace delete temp-test to delete the temporary workspace.
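One concrete rendering of that workflow, assuming a Git repository and a hypothetical feature branch named add-subnet, might look like this at the command line:

```shell
# Create and switch to a temporary workspace for this test.
terraform workspace new temp-test

# Check out whatever was most recently applied in the default workspace.
git checkout main

# Build infrastructure equivalent to the default workspace.
terraform apply

# Switch to the configuration under test (hypothetical branch name).
git checkout add-subnet

# Plan and apply the changes represented by the new configuration.
terraform apply

# When you're done, tear everything down and clean up.
terraform destroy
terraform workspace select default
terraform workspace delete temp-test
```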
For this to work you'll need to be careful to avoid colliding with existing production objects in situations where a remote system requires unique names. For systems that have a sense of accounts with separate namespaces a common choice would be to use a different account and entirely separate credentials for the test, which then means you can use the remote system's access controls to avoid accidental disruption of the "real" infrastructure.
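One way to avoid such collisions within a single account is to include the current workspace name in any names that must be unique, using the terraform.workspace named value. A sketch, assuming a hypothetical AWS S3 bucket as the uniquely-named object:

```hcl
# Hypothetical example: embed the workspace name in any globally-unique
# names so that objects created in "temp-test" cannot collide with the
# equivalent objects already created in the "default" workspace.
resource "aws_s3_bucket" "artifacts" {
  # Yields e.g. "example-artifacts-default" or "example-artifacts-temp-test".
  bucket = "example-artifacts-${terraform.workspace}"
}
```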
To create a long-lived staging or development environment for testing some higher-level component during its own deployment pipeline calls for a different strategy: in that case, the infrastructure supporting the staging environment is part of "production" as far as the application's deployment process is concerned, and so should typically be modeled as such.
To achieve that while ensuring that the two infrastructure stacks remain equivalent aside from intentional differences, factor your common infrastructure code out into one or more modules and then call those modules once for the production infrastructure and again for each other environment's infrastructure.
Depending on the intended failure domains for your system you might choose to represent both the production and staging infrastructure together in a single configuration, containing two calls to the same module:
    module "network-production" {
      source     = "../modules/network"
      cidr_block = "10.1.0.0/16"
      # etc...
    }

    module "network-staging" {
      source     = "../modules/network"
      cidr_block = "10.2.0.0/16"
      # etc...
    }
or, to ensure that the two environments are independently maintainable, you could write two separate configurations that each call the same module and terraform apply them separately.
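A sketch of that separate-configuration layout, with hypothetical directory names: each environment gets its own small root module that calls the shared module with that environment's settings, and each root module has its own state.

```hcl
# environments/staging/main.tf (hypothetical path)
# Root module for the staging environment only; the production
# configuration would live in a sibling directory such as
# environments/production, applied separately with its own state.
module "network" {
  source     = "../../modules/network"
  cidr_block = "10.2.0.0/16"
  # etc...
}
```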
In both cases, the idea is to represent the inevitable differences between your environments using input variables to your common modules, while keeping the declared resources the same. Both environments are then treated as "production infrastructure" from the application pipeline's perspective: each lives in a workspace named default (whether in one configuration or across several), and the same main branch in your version control represents the "latest version" of both of them at once.
If you want to test changes to the configurations that together represent all of the environments your application's pipeline relies on, you can combine these two approaches: create a temporary workspace representing either your entire stack or one particular environment, apply the configuration from a branch into that workspace, and only then merge it to the main branch and apply it to the default workspace.
This answer is an elaboration of the guidance in the Terraform documentation section "When to use Multiple Workspaces".
Ultimately though, this is a situation where you have a few different options with different tradeoffs, and I'd encourage you to review this advice and the other related context in the documentation and decide for yourself which approach best matches your needs. Terraform is a general tool intended for solving a variety of different problems, and ultimately only you can map your specific set of requirements onto Terraform's features.