
Currently I have a module that is used as a template to create a lot of EC2 instances in AWS. Using this template with volume_tags, I expect all the EBS volumes created along with the EC2 instance to get the same tags.

However, the issue is that after creating the EC2 instance with this Terraform script, on some occasions I need to attach a few more EBS volumes to it, and those volumes get a different set of tags (e.g. the Name tag is volume_123).

After attaching the volume to the EC2 instance in the AWS web console, I run terraform init and terraform plan again, and it tells me there are changes to apply, because the volume_tags of the instance appear to 'replace' the original Name tag on the attached volume. Example output:

  # module.ec2_2.aws_instance.ec2 will be updated in-place
  ~ resource "aws_instance" "ec2" {
        id          = "i-99999999999999999"
      ~ volume_tags = {
          ~ "Name" = "volume_123" -> "ec22"
        }
    }

Reading the documentation of the Terraform AWS provider, my understanding was that volume_tags should only apply when the instance is created. However, it seems that even after creation Terraform still tries to align the tags of every EBS volume attached to the instance. Since I need to keep the newly attached volumes with a different set of tags than the root and EBS volumes attached at creation time (different AMIs have a different number of block devices), should I avoid using volume_tags to tag the volumes at creation? And if so, what should I use instead?

The following is the code:

terraform_folder/modules/ec2_template/main.tf

resource "aws_instance" "ec2" {
   ami = var.ami
   availability_zone = var.availability_zone
   instance_type = var.instance_type

   tags = merge(map("Name", var.name), var.tags)

   volume_tags = merge(map("Name", var.name), var.tags)
}

terraform_folder/deployment/machines.tf

module "ec2_1" {
  source = "../modules/ec2_template"

  name = "ec21"

  ami = local.ec2_ami_1["a"]
  instance_type = local.ec2_instance_type["app"]

  tags = merge(
    map(
      "Role", "app",
    ),
    local.tags_default
  )
}

module "ec2_2" {
  source = "../modules/ec2_template"

  name = "ec22"

  ami = local.ec2_ami_2["b"]
  instance_type = local.ec2_instance_type["app"]

  tags = merge(
    map(
      "Role", "app",
    ),
    local.tags_default
  )
}

module "ec2_3" {
  source = "../modules/ec2_template"

  name = "ec23"

  ami = local.ec2_ami_1["a"]
  instance_type = local.ec2_instance_type["app"]

  tags = merge(
    map(
      "Role", "app",
    ),
    local.tags_default
  )
}

terraform_folder/deployment/locals.tf

locals {
  ec2_ami_1 = {
    a = "ami-11111111111111111"
    b = "ami-22222222222222222"
  }

  ec2_ami_2 = {
    a = "ami-33333333333333333"
    b = "ami-44444444444444444"
  }

  ec2_ami_3 = {
    a = "ami-55555555555555555"
    b = "ami-66666666666666666"
  }

  tags_default = {
    Terraform       = "true"
    Environment     = "test"
    Application     = "app"
    BackupFrequency = "2"
  }
}
How did it go? Does the issue still persist? Did you check with lifecycle? – Marcin

1 Answer


You shouldn't be modifying resources managed by Terraform manually through the AWS Console. This leads to resource drift and the issues you are experiencing.

Nevertheless, you can use the lifecycle meta-argument to tell Terraform to ignore changes to your volume tags:

resource "aws_instance" "ec2" {
   ami = var.ami
   availability_zone = var.availability_zone
   instance_type = var.instance_type

   tags = merge(map("Name", var.name), var.tags)

   volume_tags = merge(map("Name", var.name), var.tags)

  lifecycle {
    ignore_changes = [volume_tags]
  }
}
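
Note that with ignore_changes = [volume_tags] Terraform will ignore all drift in volume_tags, including changes you later make intentionally through the module, so those would have to be applied some other way.

If you want Terraform to keep managing the tags of the volumes it creates while leaving your manually attached volumes untouched, a possible alternative (a minimal sketch, assuming an AWS provider version where block device blocks support tags, 3.24.0 or later; the inline map here replaces the deprecated map() call) is to drop volume_tags and tag only the devices declared in the resource:

resource "aws_instance" "ec2" {
  ami               = var.ami
  availability_zone = var.availability_zone
  instance_type     = var.instance_type

  tags = merge({ "Name" = var.name }, var.tags)

  # Tag only the root volume created with this instance. Volumes attached
  # later outside Terraform are not affected because volume_tags is not set.
  root_block_device {
    tags = merge({ "Name" = var.name }, var.tags)
  }
}

With this approach the plan no longer tries to rewrite the Name tag of volumes such as volume_123, because Terraform only tracks tags on the block devices defined in the configuration.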