14
votes

I'm trying to implement an AWS Lambda function using Terraform.

I simply have a null_resource with a local-exec provisioner, plus an archive_file resource that zips the source code after all the preparation is done.

resource "null_resource" "deps" {

  triggers = {
    package_json = "${base64sha256(file("${path.module}/src/package.json"))}"
  }

  provisioner "local-exec" {
    command = "cd ${path.module}/src && npm install"
  }
}

resource "archive_file" "function" {
    type = "zip"
    source_dir = "${path.module}/src"
    output_path = "${path.module}/function.zip"

    depends_on = [ "null_resource.deps" ]
}

Recent changes to Terraform deprecated the archive_file resource, so the archive_file data source should be used instead. Unfortunately, data sources are read before resources are created, so the local provisioner from the dependent resource runs long after the zip has been produced. The code below no longer produces a deprecation warning, but it doesn't work at all.

resource "null_resource" "deps" {

  triggers = {
    package_json = "${base64sha256(file("${path.module}/src/package.json"))}"
  }

  provisioner "local-exec" {
    command = "cd ${path.module}/src && npm install"
  }
}

data "archive_file" "function" {
    type = "zip"
    source_dir = "${path.module}/src"
    output_path = "${path.module}/function.zip"

    depends_on = [ "null_resource.deps" ]
}

Am I missing something? What is the correct way to do this with recent versions?

Terraform: v0.7.11, OS: Win10

2
I think you are right, and it is not possible to do using just Terraform. – Anton Babenko
The example in the question works fine for me. You can use the working_dir option instead of prefixing the command with cd. – Yep_It's_Me
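
A minimal sketch of the working_dir suggestion from the comment above; it assumes a Terraform version whose local-exec provisioner accepts the working_dir argument (newer than the 0.7.11 in the question), with the same paths:

resource "null_resource" "deps" {
  triggers = {
    package_json = "${base64sha256(file("${path.module}/src/package.json"))}"
  }

  provisioner "local-exec" {
    # working_dir replaces the "cd ... &&" prefix from the question.
    working_dir = "${path.module}/src"
    command     = "npm install"
  }
}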

2 Answers

10
votes

It turns out there is an issue with the way Terraform core handles depends_on for data resources. There are a couple of issues reported, one against the archive provider and another against core.

The following workaround is listed in the archive provider issue. Note that it uses a data.null_data_source to sit between the null_resource and data.archive_file, which turns the ordering into an implicit dependency (expressed through interpolation) rather than an explicit dependency declared with depends_on.

resource "null_resource" "lambda_exporter" {
  # (some local-exec provisioner blocks, presumably...)

  triggers = {
    index = "${base64sha256(file("${path.module}/lambda-files/index.js"))}"
  }
}

data "null_data_source" "wait_for_lambda_exporter" {
  inputs = {
    # This ensures that this data resource will not be evaluated until
    # after the null_resource has been created.
    lambda_exporter_id = "${null_resource.lambda_exporter.id}"

    # This value gives us something to implicitly depend on
    # in the archive_file below.
    source_dir = "${path.module}/lambda-files/"
  }
}

data "archive_file" "lambda_exporter" {
  output_path = "${path.module}/lambda-files.zip"
  source_dir  = "${data.null_data_source.wait_for_lambda_exporter.outputs["source_dir"]}"
  type        = "zip"
}
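
For completeness, a sketch of how the resulting archive might then be wired into an aws_lambda_function; the function name, handler, runtime and IAM role below are placeholders, and output_base64sha256 assumes a reasonably recent archive provider:

resource "aws_lambda_function" "exporter" {
  function_name    = "lambda-exporter"                                    # placeholder name
  filename         = "${data.archive_file.lambda_exporter.output_path}"
  source_code_hash = "${data.archive_file.lambda_exporter.output_base64sha256}"
  handler          = "index.handler"                                      # placeholder handler
  runtime          = "nodejs8.10"                                         # pick whatever runtime you actually target
  role             = "${aws_iam_role.lambda_exporter.arn}"                # assumes an IAM role defined elsewhere
}
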
2
votes

There is a new data source in Terraform 0.8, external, that allows you to run external commands and extract their output. See data.external.

The data source should only be used to retrieve some dependency value, not to run the npm install itself; you should still do that via the null_resource. Since this is a Terraform data source, it should not have any side effects (although you may need some in this case, not sure).

So basically: the null_resource installs the dependencies, data.external returns some value you can depend on for the archive (the directory path, for example), and then data.archive_file performs the archiving.
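
A minimal sketch of that chain, assuming a Unix-style shell for the external program (on Windows you would swap in PowerShell or similar); the resource names are illustrative:

resource "null_resource" "deps" {
  triggers = {
    package_json = "${base64sha256(file("${path.module}/src/package.json"))}"
  }

  provisioner "local-exec" {
    command = "cd ${path.module}/src && npm install"
  }
}

data "external" "wait_for_deps" {
  # The program just echoes the source directory back as JSON; referencing
  # null_resource.deps.id in the query creates the implicit dependency that
  # defers this read until after npm install has run.
  program = ["sh", "-c", "echo '{\"source_dir\": \"${path.module}/src\"}'"]

  query = {
    deps_id = "${null_resource.deps.id}"
  }
}

data "archive_file" "function" {
  type        = "zip"
  source_dir  = "${data.external.wait_for_deps.result["source_dir"]}"
  output_path = "${path.module}/function.zip"
}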

This would probably work best with a pseudo-random directory name, to make dirty checks a little cleaner.