I'm using Terraform to create an EC2 instance that will serve as a Docker host. To connect to the Docker daemon securely over the internet, I need to generate TLS keys and certificates, and when creating the certificate you have to specify the IP addresses and hostnames you will be connecting to. In Terraform these values can be interpolated dynamically, but that easily leads to a cyclic dependency. Let's use an example:
resource "tls_private_key" "example" {
  algorithm = "ECDSA"
}

resource "tls_self_signed_cert" "docker_host_key" {
  key_algorithm         = "${tls_private_key.example.algorithm}"
  private_key_pem       = "${tls_private_key.example.private_key_pem}"
  validity_period_hours = 12
  early_renewal_hours   = 3
  allowed_uses          = ["server_auth"]
  dns_names             = ["${aws_instance.example.public_dns}"]
  ip_addresses          = ["${aws_instance.example.public_ip}"]

  subject {
    common_name  = "example.com"
    organization = "example"
  }
}
resource "aws_instance" "example" {
  count                       = 1
  ami                         = "ami-d05e75b8"
  instance_type               = "t2.micro"
  subnet_id                   = "subnet-24h4fos9"
  associate_public_ip_address = true

  provisioner "remote-exec" {
    inline = [
      "echo \"${tls_private_key.example.private_key_pem}\" > private_key_pem",
      "echo \"${tls_self_signed_cert.docker_host_key.cert_pem}\" > cert_pem",
    ]
  }
}
In the remote-exec provisioner we need to write out values from the tls_self_signed_cert resource, but that resource in turn needs the public DNS name and IP address from the aws_instance resource, so each resource depends on the other.
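For what it's worth, Terraform detects this itself: running a plan against the configuration above aborts with a cycle error, and the dependency graph can be rendered to see where the loop sits (the exact error wording varies by Terraform version, and the graph step assumes Graphviz is installed):

```shell
# Planning the configuration above fails because of the circular reference,
# with an error along the lines of:
#   Error: Cycle: tls_self_signed_cert.docker_host_key, aws_instance.example
terraform plan

# Render the resource dependency graph to inspect the cycle visually
terraform graph | dot -Tsvg > graph.svg
```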
How can I resolve this cyclic dependency?