6
votes

I have created a Terraform stack for all the required resources that we utilise to build out a virtual data center within AWS: VPC, subnets, security groups, etc.
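For context, here is a minimal sketch of the kind of networking configuration I mean (resource names and CIDR ranges are placeholders, not our real values):

```hcl
# Illustrative only: names and CIDR ranges are placeholders.
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "private" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.1.0/24"
}

resource "aws_security_group" "internal" {
  vpc_id = aws_vpc.main.id
}

# Output so other configurations (or modules) can consume the subnet ID.
output "private_subnet_id" {
  value = aws_subnet.private.id
}
```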

It all works beautifully :). I am having a constant argument with network engineers who want to have a completely separate state for networking. As a result we have to manage multiple state files, and it takes 10 to 15 terraform plan/apply commands to bring up the data center. Not only do we have to run the commands multiple times, we cannot reference the module output variables when creating EC2 instances etc., so now there are "magic" variables appearing within variable files.

I want to put the scripts that create the EC2 instances, ELBs etc. within the same directory as the "data center" configuration, so that we manage one state file (encrypted in S3 with a DynamoDB lock) and our git repo has a one-to-one relationship with our infrastructure. There is also the added benefit that a single terraform plan/apply will build the whole data center in one command.
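For reference, a minimal sketch of the backend setup I have in mind (bucket and table names are placeholders):

```hcl
terraform {
  backend "s3" {
    bucket         = "example-terraform-state"       # placeholder bucket name
    key            = "datacenter/terraform.tfstate"
    region         = "eu-west-1"                     # placeholder region
    encrypt        = true                            # server-side encryption of state
    dynamodb_table = "terraform-locks"               # placeholder lock table
  }
}
```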

The question really is: is it a good idea to manage data center resources (VPC, subnets, security groups) and compute resources in a single state file? Are there any issues I might come across? Does anybody have experience managing an AWS environment with Terraform this way?

Regards, David


1 Answer

6
votes

To begin with, the terraform_remote_state data source lets you access output variables from other projects' state files, so you don't have to use magic variables. The rest is just a matter of style. Do you frequently bring the whole data center infrastructure up? If so, you may consider doing it in one project. If, on the other hand, you only change some things, you may want to keep it more modular, relying on output from other projects. Keeping the states separate makes planning faster and avoids a very costly terraform destroy mistake.
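For example, something along these lines (a sketch; the bucket, key, and output names are assumptions about your setup, and the AMI is a placeholder):

```hcl
# Read the networking project's state, assuming it is stored in S3 and
# declares an output named "private_subnet_id".
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "example-terraform-state"   # placeholder bucket name
    key    = "network/terraform.tfstate"
    region = "eu-west-1"
  }
}

resource "aws_instance" "app" {
  ami           = "ami-12345678"         # placeholder AMI
  instance_type = "t2.micro"
  subnet_id     = data.terraform_remote_state.network.outputs.private_subnet_id
}
```

This way the compute project stays in its own state file but still picks up the subnet ID from the networking project's outputs instead of a hard-coded variable.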