I'm trying to use Terraform to instantiate 3 aws_instance resources that are aware of each other's IP addresses. This of course results in a cyclic dependency. What is the best way to overcome this problem? I've tried a couple of solutions:
- Instantiate 2 instances together, and then 1 instance that depends on those 2. In the third instance, use a user_data script that lets it ssh into the other 2 instances to set up the necessary configs (see the first sketch after this list).
It works, but I don't like the fact that it creates 2 distinct groups of resources even though the 3 instances are, for all intents and purposes, identical after init.
- Have the 3 instances instantiated at the same time, then instantiate another instance whose sole purpose is to ssh into each instance and set up the necessary configs. Once the init is done, the additional instance terminates itself (see the second sketch after this list).
It also works, but Terraform then sees the 4th resource as terminated and tries to recreate it whenever there is an update, so this is not very clean.
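A minimal sketch of the first approach, reusing data.aws_ami.ubuntu from the snippet in the EDIT below; the resource names and the join_and_configure.sh.tpl script are hypothetical:

resource "aws_instance" "etcd_seed" {
  count         = 2
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t3.micro"
}

resource "aws_instance" "etcd_joiner" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t3.micro"

  # One-way reference (joiner -> seeds), so no cycle. The hypothetical
  # user_data script ssh'es into both seeds to push the final
  # three-node config.
  user_data = templatefile("${path.module}/join_and_configure.sh.tpl", {
    seed_ips = aws_instance.etcd_seed[*].private_ip
  })
}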
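And a sketch of the second approach. Setting instance_initiated_shutdown_behavior = "terminate" and ending the (hypothetical) configure_cluster.sh.tpl script with shutdown -h now is one assumed way to have the helper remove itself; it is also exactly why Terraform later sees the resource as terminated:

resource "aws_instance" "configurator" {
  ami                                  = data.aws_ami.ubuntu.id
  instance_type                        = "t3.micro"
  instance_initiated_shutdown_behavior = "terminate"

  # Depends on all three etcd nodes (aws_instance.etcd, defined in the
  # EDIT below), ssh'es into each, then powers itself off.
  user_data = templatefile("${path.module}/configure_cluster.sh.tpl", {
    node_ips = aws_instance.etcd[*].private_ip
  })
}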
Any recommendations? Thanks.
EDIT:
Here is an attempt with remote-exec that doesn't work, to illustrate the cyclic dependency:
resource "aws_instance" "etcd" {
count = 3
ami = data.aws_ami.ubuntu.id
instance_type = "t3.micro"
subnet_id = module.vpc.public_subnets[count.index].id
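
  # The self-references to aws_instance.etcd below, inside the
  # resource's own block, are what Terraform rejects as a cycle: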
provisioner "remote-exec" {
inline = [
"echo ${aws_instance.etcd[0].private_ip}",
"echo ${aws_instance.etcd[1].private_ip}",
"echo ${aws_instance.etcd[2].private_ip}"
]
}
}
Comment: "… local-exec, and use the AWS CLI to execute an SSM Run Command on the instances. Have you considered these options?" – Marcin
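For reference, a minimal sketch of the local-exec/SSM idea from that comment: moving the provisioner onto a null_resource lets it reference aws_instance.etcd[*] with a one-way dependency, so there is no cycle. The choice of AWS-RunShellScript, and the assumption that the instances run the SSM agent under an instance profile allowing ssm:SendCommand, are mine:

resource "null_resource" "etcd_config" {
  count = 3

  # Re-run the provisioner if the cluster membership changes.
  triggers = {
    cluster_ips = join(",", aws_instance.etcd[*].private_ip)
  }

  provisioner "local-exec" {
    command = <<-EOT
      aws ssm send-command \
        --instance-ids ${aws_instance.etcd[count.index].id} \
        --document-name "AWS-RunShellScript" \
        --parameters 'commands=["echo ${join(" ", aws_instance.etcd[*].private_ip)}"]'
    EOT
  }
}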