0
votes

I am using Terraform with the Kubernetes provider. When creating ConfigMaps, I want their names to carry a content-based suffix, typically a hash of the content.

That way, whenever the content changes the name changes too, which forces a rollout of any Deployment that references the ConfigMap.

So I would like it to work something like this:

resource "kubernetes_config_map" "prometheus_config" {
  metadata {
    name      = "prometheus-" + computeHash(file("${path.module}/alerts.yml"), file("${path.module}/recordings.yml"), "abcd")
  }

  data = {
    "foo" = file("${path.module}/alerts.yml")
    "bar" = file("${path.module}/recordings.yml")
    "txt" = "abcd"
  }
}

Is there any way to implement a custom function like computeHash? Or is there another way to achieve this?


3 Answers

1
vote

There is no way to implement a custom function like this in Terraform, but Terraform has a number of built-in functions implementing standard hash algorithms.

For example, to use a base64-encoded SHA-256 hash you could write something like the following using the base64sha256 function:

  name = "prometheus-" + base64sha256(join("\n", [
    file("${path.module}/alerts.yml"),
    file("${path.module}/recordings.yml"),
    "abcd",
  ])

Because the file function returns a string, all of the referenced files must contain valid UTF-8 text. The hash is then computed over the UTF-8 encoding of the Unicode characters in the files.

The documentation page for base64sha256 includes navigation links to various other "Hash and Crypto Functions", some of which implement other hashing algorithms.
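One caveat: base64 output can contain characters such as +, / and =, which are not valid in Kubernetes object names, so a hex-encoded function may be a safer choice here. A minimal sketch of the same idea using sha256, whose hex output uses only lowercase hexadecimal digits and is therefore name-safe:

  name = "prometheus-${sha256(join("\n", [
    file("${path.module}/alerts.yml"),
    file("${path.module}/recordings.yml"),
    "abcd",
  ]))}"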

If your goal is just to include everything in the data map, you could avoid the duplication by factoring the map out into a Local Value and then hashing a string representation of it, such as its JSON serialization:

locals {
  prometheus_config = {
    "foo" = file("${path.module}/alerts.yml")
    "bar" = file("${path.module}/recordings.yml")
    "txt" = "abcd"
  }
}

resource "kubernetes_config_map" "prometheus_config" {
  metadata {
    name = "prometheus-" + base64sha256(jsonencode(local.prometheus_config))
  }

  data = local.prometheus_config
}
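A SHA-256 suffix adds a fairly long tail to the name. If you want something shorter, you could truncate a hex-encoded hash with substr; a sketch, where the 8-character length is an arbitrary choice rather than anything Terraform or Kubernetes requires:

    name = "prometheus-${substr(sha256(jsonencode(local.prometheus_config)), 0, 8)}"

Truncating increases the chance of two different configurations mapping to the same name, but for the purpose of triggering rollouts a short prefix of the hash is usually enough.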
1
vote

The kubernetes_config_map resource exports a resource_version attribute as part of its metadata. As described in the linked docs:

An opaque value that represents the internal version of this config map that can be used by clients to determine when config map has changed. For more info see Kubernetes reference

You can use this to trigger a rollout by interpolating the value directly into your kubernetes_deployment resource.

I personally put the value into an environment variable in the container spec, which then triggers a redeploy whenever the config map changes. Tweaking the example given in the kubernetes_deployment docs gives:

resource "kubernetes_deployment" "example" {
  metadata {
    name = "terraform-example"
    labels = {
      test = "MyExampleApp"
    }
  }

  spec {
    replicas = 3

    selector {
      match_labels = {
        test = "MyExampleApp"
      }
    }

    template {
      metadata {
        labels = {
          test = "MyExampleApp"
        }
      }

      spec {
        container {
          image = "nginx:1.7.8"
          name  = "example"

          env {
            name  = "configmap"
            value = kubernetes_config_map.example.metadata.0.resource_version
          }

          resources {
            limits {
              cpu    = "0.5"
              memory = "512Mi"
            }
            requests {
              cpu    = "250m"
              memory = "50Mi"
            }
          }

          liveness_probe {
            http_get {
              path = "/nginx_status"
              port = 80

              http_header {
                name  = "X-Custom-Header"
                value = "Awesome"
              }
            }

            initial_delay_seconds = 3
            period_seconds        = 3
          }
        }
      }
    }
  }
}

It is worth noting that this approach currently has the unfortunate behaviour of requiring two applies to trigger the deployment. Terraform only sees the change to the config map in the first apply; a follow-up plan then shows that the deployment spec's container env has changed, and applying that triggers the rollout. I've not dug into why the Kubernetes provider works this way, since Terraform should be able to see that the deployment depends on the config map and that the value is going to change.

0
votes

If your deployment is also managed in Terraform, you can easily achieve this by putting a hash of the ConfigMap data into one of the Deployment's labels or into an environment variable in the container spec, e.g.:

env {
  name  = "prometheus_cfgmap_version"
  value = base64sha256(jsonencode(kubernetes_config_map.prometheus_config.data))
}

If the Deployment is managed outside Terraform, you can do something similar directly in the Deployment object using the downward API, exposing a pod annotation as an environment variable, e.g.:

env:
  - name: CONFIG_HASH
    valueFrom:
      fieldRef:
        fieldPath: metadata.annotations['configHash']
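For this to work, something still has to write the hash into the pod template's annotations whenever the ConfigMap content changes, for example a CI step; the annotation name configHash here is just an illustrative choice:

spec:
  template:
    metadata:
      annotations:
        configHash: "<hash of the ConfigMap data>"

Note that changing any field of the pod template, including its annotations, already triggers a rollout on its own, so the environment variable mainly serves to make the current hash visible inside the container.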