0
votes

I have created a GCP Kubernetes cluster using Terraform and configured a few Kubernetes resources on it, such as namespaces and Helm releases. I would like Terraform to automatically destroy and recreate all of the Kubernetes resources whenever the GCP cluster is destroyed and recreated, but I can't seem to figure out how to do it.

The behavior I am trying to recreate is similar to what you would get if you used triggers with null_resources. Is this possible with normal resources?
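For reference, this is the null_resource pattern I mean, as a minimal sketch (the resource name cluster_watcher is just illustrative):

```hcl
# A null_resource is destroyed and recreated whenever any value
# in its triggers map changes, e.g. when the cluster is replaced
# and gets a new id.
resource "null_resource" "cluster_watcher" {
  triggers = {
    cluster_id = google_container_cluster.primary.id
  }
}
```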

resource "google_container_cluster" "primary" {
  name               = "marcellus-wallace"
  location           = "us-central1-a"
  initial_node_count = 3
}

resource "kubernetes_namespace" "example" {
  metadata {
    annotations = {
      name = "example-annotation"
    }

    labels = {
      mylabel = "label-value"
    }

    name = "terraform-example-namespace"

    # Something like this, but triggers only works with null_resource
    triggers = {
      cluster_id = google_container_cluster.primary.id
    }
  }
}

1
I have indeed looked at that, and from what I can see that argument only affects the order in which the resources are deployed. If in the plan above I changed a variable in google_container_cluster that caused it to be redeployed, Terraform wouldn't automatically pick up that the namespace would also need to be redeployed, even with the depends_on argument present. Is there any other way to achieve this? – BigSmoke

1 Answer

0
votes

In your specific case, you don't need to specify any explicit dependency. Terraform sets it automatically, because your second resource references the first via cluster_id = "${google_container_cluster.primary.id}".

When you do need to set a dependency manually, you can use the depends_on meta-argument.
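For example, a minimal sketch of an explicit dependency (the namespace shown is illustrative):

```hcl
resource "kubernetes_namespace" "example" {
  metadata {
    name = "terraform-example-namespace"
  }

  # Explicit ordering dependency: this namespace is created only
  # after the cluster exists. Note that, as the comment above points
  # out, ordering alone does not force the namespace to be recreated
  # when the cluster itself is replaced.
  depends_on = [google_container_cluster.primary]
}
```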