I have some resources whose count is parameterised by a variable. I use it to create VM resources as well as null_resources that, for example, run deployment scripts on them. When I reduce the count from 2 to 1 and apply, I get an error.
Terraform executes the plan with no complaints, but when I apply it reports a cycle:
Error: Cycle: null_resource.network_connection_configuration[7] (destroy), null_resource.network_connection_configuration[8] (destroy), null_resource.network_connection_configuration[3] (destroy), null_resource.network_connection_configuration[4] (destroy), null_resource.network_connection_configuration[0] (destroy), null_resource.network_connection_configuration[6] (destroy), null_resource.network_connection_configuration[1] (destroy), null_resource.network_connection_configuration[9] (destroy), null_resource.network_connection_configuration[2] (destroy), null_resource.network_connection_configuration[10] (destroy), hcloud_server.kafka[2] (destroy), local.all_machine_ips, null_resource.network_connection_configuration (prepare state), null_resource.network_connection_configuration[5] (destroy)
Here is the relevant part of the file:
variable "kafka_count" {
  default = 3
}

resource "hcloud_server" "kafka" {
  count       = "${var.kafka_count}"
  name        = "kafka-${count.index}"
  image       = "ubuntu-18.04"
  server_type = "cx21"
}

locals {
  all_machine_ips = "${hcloud_server.kafka.*.ipv4_address}"
}

resource "null_resource" "network_connection_configuration" {
  count = "${length(local.all_machine_ips)}"

  triggers = {
    ips = "${join(",", local.all_machine_ips)}"
  }

  depends_on = [
    "hcloud_server.kafka"
  ]

  connection {
    type = "ssh"
    user = "deploy"
    host = "${element(local.all_machine_ips, count.index)}"
    port = 22
  }

  // ... some file provisioners
}
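For context, the cycle seems to come from every null_resource instance depending, via local.all_machine_ips, on every server, while destroying a server in turn waits on the null_resources. A per-instance variant I considered (a sketch only, untested, and note it changes the triggers semantics so provisioners no longer re-run when other nodes' IPs change) would reference each server's own address directly:

```hcl
// Sketch (untested): each null_resource depends only on its matching
// hcloud_server, instead of fanning in through local.all_machine_ips.
resource "null_resource" "network_connection_configuration" {
  count = var.kafka_count

  triggers = {
    ip = hcloud_server.kafka[count.index].ipv4_address
  }

  connection {
    type = "ssh"
    user = "deploy"
    host = hcloud_server.kafka[count.index].ipv4_address
    port = 22
  }

  // ... some file provisioners
}
```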
When I try to find the cycle using the visualisation:
terraform graph -verbose -draw-cycles
There are no cycles visible.
When I set TF_LOG=1, the debug log doesn't show any errors.
So the issue is that I can increase the count but not decrease it. I don't want to manually hack the state file, as that means I won't be able to scale down in future! I'm using Terraform v0.12.1.
Are there any strategies for debugging this situation?
Comments:
- "... depends_on: you can still pick up implicit dependencies when you map outputs from one resource or data source as an input to another resource. Also you could post an MCVE for this, since the error message only references two resources." - Matt Schuchard
- "null_resource.files_sync looks like the culprit." - Matt Schuchard
- "... plan ... this would be a lot easier!" - Joe
- "... the null_resources to let them get destroyed (which is safe because they don't really exist). With those not existing I can safely change the count to scale down the 'real' resources. Not satisfactory, but I have a suspicion that it could be connected to github.com/hashicorp/terraform/issues/21662 / stackoverflow.com/questions/56514719/..." - Joe
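For future readers, the tainting workaround described in the comments might look roughly like this (a sketch only, with a hypothetical instance index; terraform taint marks a resource for recreation, which is harmless here because the null_resources manage no real infrastructure):

```shell
# Sketch (untested): taint the surplus null_resources so Terraform is
# willing to destroy them, then lower kafka_count and apply again.
terraform taint 'null_resource.network_connection_configuration[2]'
terraform apply
```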