
I'm setting up some Terraform to manage a Lambda function and an S3 bucket, with versioning enabled on the bucket's contents. Creating the first version of the infrastructure is fine, but when releasing a second version, Terraform replaces the zip file in place instead of creating a new version.

I've tried adding versioning to the S3 bucket in the Terraform configuration and moving the api-version into a variable string.

data "archive_file" "lambda_zip" {
  type        = "zip"
  source_file = "main.js"
  output_path = "main.zip"
}

resource "aws_s3_bucket" "lambda_bucket" {
  bucket = "s3-bucket-for-tft-project"
  versioning {
    enabled = true
  }
}
resource "aws_s3_bucket_object" "lambda_zip_file" {
  bucket = "${aws_s3_bucket.lambda_bucket.bucket}"
  key    = "v${var.api-version}-${data.archive_file.lambda_zip.output_path}"
  source = "${data.archive_file.lambda_zip.output_path}"
}

resource "aws_lambda_function" "lambda_function" {
  s3_bucket         = "${aws_s3_bucket.lambda_bucket.bucket}"
  s3_key            = "${aws_s3_bucket_object.lambda_zip_file.key}"
  function_name     = "lambda_test_with_s3_version"
  role              = "${aws_iam_role.lambda_exec.arn}"
  handler           = "main.handler"
  runtime           = "nodejs8.10"
}

I would expect a new zip object to be created under a new key, with the Lambda function now pointing at the new version, and the ability to switch back to the old version by changing var.api-version.


1 Answer


Terraform isn't designed for creating this sort of "artifact" object where each new version should be separate from the ones before it.

The data.archive_file data source was added to Terraform in the early days of AWS Lambda when the only way to pass values from Terraform into a Lambda function was to retrieve the intended zip artifact, amend it to include additional files containing those settings, and then write that to Lambda.

Now that AWS Lambda supports environment variables, that pattern is no longer recommended. Instead, deployment artifacts should be created by some separate build process outside of Terraform and recorded somewhere that Terraform can discover them. For example, you could use SSM Parameter Store to record your current desired version and then have Terraform read that to decide which artifact to retrieve:

data "aws_ssm_parameter" "lambda_artifact" {
  name = "lambda_artifact"
}

locals {
  # Let's assume that this SSM parameter contains a JSON
  # string describing which artifact to use, like this
  # {
  #   "bucket": "s3-bucket-for-tft-project",
  #   "key": "v2.0.0/example.zip"
  # }
  lambda_artifact = jsondecode(data.aws_ssm_parameter.lambda_artifact.value)
}

resource "aws_lambda_function" "lambda_function" {
  s3_bucket         = local.lambda_artifact.bucket
  s3_key            = local.lambda_artifact.key
  function_name     = "lambda_test_with_s3_version"
  role              = aws_iam_role.lambda_exec.arn
  handler           = "main.handler"
  runtime           = "nodejs8.10"
}

This build/deploy separation allows for three different actions, whereas doing it all in Terraform only allows for one:

  • To release a new version, you can run your build process (in a CI system, perhaps) and have it push the resulting artifact to S3 and record it as the latest version in the SSM parameter, and then trigger a Terraform run to deploy it.
  • To change other aspects of the infrastructure without deploying a new function version, just run Terraform without changing the SSM parameter and Terraform will leave the Lambda function untouched.
  • If you find that a new release is defective, you can write the location of an older artifact into the SSM parameter and run Terraform to deploy that previous version.

A more complete description of this approach is in the Terraform guide Serverless Applications with AWS Lambda and API Gateway, which uses a Lambda web application as an example but can be applied to many other AWS Lambda use-cases too. Using SSM is just an example; any data that Terraform can retrieve using a data source can be used as an intermediary to decouple the build and deploy steps from one another.
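For example, if SSM Parameter Store isn't a good fit, the build process could instead upload a small JSON manifest object to S3 and Terraform could read it with the aws_s3_bucket_object data source, in place of the SSM data source and locals block above. This is only a sketch: the manifest key below is a hypothetical name written by the CI process, and the data source's body attribute is only populated for text-like content types such as application/json, so the manifest must be uploaded with that content type.

data "aws_s3_bucket_object" "lambda_manifest" {
  bucket = "s3-bucket-for-tft-project"
  # Hypothetical manifest key maintained by the build process, containing
  # e.g. {"bucket": "s3-bucket-for-tft-project", "key": "v2.0.0/example.zip"}
  key    = "artifacts/lambda-current.json"
}

locals {
  # body is only available when the object has a text-like content type,
  # so the build process must upload the manifest as application/json.
  lambda_artifact = jsondecode(data.aws_s3_bucket_object.lambda_manifest.body)
}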

This general idea applies to all sorts of build artifacts as well as Lambda zip files: for example, custom AMIs created with HashiCorp Packer, or Docker images created using docker build. Separating the build process, the version selection mechanism, and the deployment process gives a degree of workflow flexibility that can support both the happy path and any exceptional paths taken during incidents.
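As a sketch of how the same separation looks for machine images, assuming a separate Packer build that publishes AMIs under a predictable name (the "example-app-*" prefix below is hypothetical), Terraform can select the most recently published image with the aws_ami data source and deploy it, without ever building the image itself:

# Hypothetical example: pick up the newest AMI published by a separate
# Packer build, identified by a name prefix and owned by this account.
data "aws_ami" "app" {
  most_recent = true
  owners      = ["self"]

  filter {
    name   = "name"
    values = ["example-app-*"]
  }
}

resource "aws_instance" "app" {
  ami           = data.aws_ami.app.id
  instance_type = "t3.micro"
}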