0 votes

Before Terraform supported the Storage Gateway in AWS, I created three file gateways through other means. Essentially, I used Terraform to launch the bits that were supported (IAM policy, S3 bucket, EC2 instance, cache volume), and used a bash script making CLI calls to pull it all together. It worked great.
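
For context, the CLI portion of that script did roughly the following (a simplified sketch rather than the exact script; the gateway IP, timezone, and region values are placeholders):

# Fetch an activation key from the gateway appliance (it answers over HTTP during setup).
# The no_redirect parameter asks the gateway to return the key in the response body.
activation_key=$(curl -s "http://${gateway_ip}/?activationRegion=us-east-1&no_redirect")

# Activate the appliance as an S3 file gateway.
aws storagegateway activate-gateway \
  --activation-key "${activation_key}" \
  --gateway-name "${gateway_name}" \
  --gateway-timezone "GMT-5:00" \
  --gateway-region us-east-1 \
  --gateway-type FILE_S3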

Now that Terraform supports the creation/activation of a file gateway (including provisioning of the cache volume), I've refactored my Terraform to eliminate the bash scripts.

The gateway instance and cache volume were created using the following Terraform:

resource "aws_instance" "gateway" {
  ami           = "${var.instance_ami}"
  instance_type = "${var.instance_type}"

  # Refer to AWS File Gateway documentation for minimum system requirements.
  ebs_optimized = true
  subnet_id     = "${element(data.aws_subnet_ids.subnets.ids, random_integer.priority.result)}"

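  # This EBS volume becomes the gateway's cache disk (it surfaces later as DiskPath /dev/xvdf).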
  ebs_block_device {
    device_name           = "/dev/xvdf"
    volume_size           = "${var.ebs_cache_volume_size}"
    volume_type           = "gp2"
    delete_on_termination = true
  }

  key_name = "${var.key_name}"

  vpc_security_group_ids = [
    "${aws_security_group.storage_gateway.id}",
  ]
}

Once the instance is up and running, the following snippet from a bash script looks up the volume ID and configures the volume as the gateway cache:

# gets the gateway_arn and uses that to lookup the volume ID
gateway_arn=$(aws storagegateway list-gateways --query "Gateways[*].{arn:GatewayARN,name:GatewayName}" --output text | grep ${gateway_name} | awk '{print $1}')
volume_id=$(aws storagegateway list-local-disks --gateway-arn ${gateway_arn} --query "Disks[*].{id:DiskId}" --output text)
echo "the volume ID is $volume_id"

# add the gateway cache
echo "adding cache to the gateway"
aws storagegateway add-cache --gateway-arn ${gateway_arn} --disk-id ${volume_id}

The end result of this process is that the gateway is online and the cache volume is configured, but the Terraform state is only aware of the instance. I subsequently refactored the Terraform to include the following:

resource "aws_storagegateway_gateway" "nfs_file_gateway" {
  gateway_ip_address = "${aws_instance.gateway.private_ip}"
  gateway_name       = "${var.gateway_name}"
  gateway_timezone   = "${var.gateway_time_zone}"
  gateway_type       = "FILE_S3"
}

resource "aws_storagegateway_cache" "nfs_cache_volume" {
  disk_id     = "${aws_instance.gateway.ebs_block_device.volume_id}"
  gateway_arn = "${aws_storagegateway_gateway.nfs_file_gateway.id}"
}

From there, I ran the following to get the disk_id of the cache volume (note: I have redacted the account ID and gateway ID):

aws storagegateway list-local-disks --gateway-arn arn:aws:storagegateway:us-east-1:[account_id]:gateway/[gateway_id] --region us-east-1

This returns:

{
    "GatewayARN": "arn:aws:storagegateway:us-east-1:[account_id]:gateway/[gateway_id]",
    "Disks": [
        {
            "DiskId": "xen-vbd-51792",
            "DiskPath": "/dev/xvdf",
            "DiskNode": "/dev/sdf",
            "DiskStatus": "present",
            "DiskSizeInBytes": 161061273600,
            "DiskAllocationType": "CACHE STORAGE"
        }
    ]
}

I then ran a Terraform import command on the aws_storagegateway_cache resource to pull the existing resource into the state file.

Command I ran:

terraform_11.5 import module.sql_backup_file_gateway.module.storage_gateway.aws_storagegateway_cache.nfs_cache_volume arn:aws:storagegateway:us-east-1:[account_id]:gateway/[gateway_id]:xen-vbd-51792

The import completes successfully. I then run terraform init and terraform plan, which shows that if I were to run an apply, the cache volume would be recreated.

Output from the plan:

-/+ module.sql_backup_file_gateway.module.storage_gateway.aws_storagegateway_cache.nfs_cache_volume (new resource required)
      id:                     "arn:aws:storagegateway:us-east-1:[account_id]:gateway/[gateway_id]:xen-vbd-51792" => <computed> (forces new resource)
      disk_id:                "xen-vbd-51792" => "1" (forces new resource)
      gateway_arn:            "arn:aws:storagegateway:us-east-1:[account_id]:gateway/[gateway_id]" => "arn:aws:storagegateway:us-east-1:[account_id]:gateway/[gateway_id]"

There are no other values for the disk_id that I can provide in the import statement that allow the import to complete. I'm not sure what I can change to prevent the cache volume from being recreated when a subsequent terraform apply is run.
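
For reference, the imported resource can be inspected to see exactly what disk_id landed in the state (same resource address and binary as the import command above):

terraform_11.5 state show module.sql_backup_file_gateway.module.storage_gateway.aws_storagegateway_cache.nfs_cache_volume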

What does your Terraform code look like? If you can post the state file as well (or at least the relevant section) that would also be useful. - ydaetskcoR
Welcome to Stack Overflow! Other users marked your question for low quality and need for improvement. I re-worded/formatted your input to make it easier to read/understand. Please review my changes to ensure they reflect your intentions. But I think your question is still not answerable. You should edit your question now, to add missing details (see minimal reproducible example). Feel free to drop me a comment in case you have further questions or feedback for me. - GhostCat
@ydaetskcoR - I have updated the original ask to include much more detail - Dave Stauffacher
There is a devops.stackexchange.com now for this kind of question - JleruOHeP
Thanks for the update. Can you also show the output of terraform state show aws_instance.gateway please? - ydaetskcoR

1 Answer

2 votes

I've actually found the solution. @ydaetskcoR - your comment regarding mapping the volume_id to the disk_id led me to find the Terraform I needed to bridge the gap between the instance declaration and the cache declaration.

This data source block looks up the gateway's local disk by its device path (the ebs_block_device attached above), which exposes the correct disk_id for use later in the Terraform:

data "aws_storagegateway_local_disk" "cache" {
  disk_path   = "/dev/xvdf"
  gateway_arn = "${aws_storagegateway_gateway.nfs_file_gateway.arn}"
}

Once I added this block, I then refactored the Terraform that configures the cache to the following:

resource "aws_storagegateway_cache" "nfs_cache_volume" {
  disk_id     = "${data.aws_storagegateway_local_disk.cache.id}"
  gateway_arn = "${aws_storagegateway_gateway.nfs_file_gateway.id}"
}

Now when I run terraform init and terraform plan, the cache volume no longer shows up as needing any changes or replacement.
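
As an extra sanity check (optional, using the same redacted gateway ARN as above), the gateway itself can be asked which disks it is using for cache:

# Confirm the disk is registered as cache storage on the gateway
aws storagegateway describe-cache \
  --gateway-arn arn:aws:storagegateway:us-east-1:[account_id]:gateway/[gateway_id] \
  --region us-east-1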

Thanks for helping me track this down.

-Dave