1
votes

The aws_volume_attachment resource in Terraform requires the instance_id ... My problem is that the instance I want to mount the volume to is part of an ECS cluster, and I cannot seem to find any clever examples of passing the instance ID of an instance in the ECS cluster to aws_volume_attachment so I can mount an existing EBS volume.

Using the ARN does not work.

resource "aws_volume_attachment" "ebs_att" {
  device_name = "/dev/sdp"
  volume_id   = "${aws_ebs_volume.example.id}"
  instance_id = "${aws_instance.web.id}"
}

EDIT: I basically bootstrapped a script for the instances, and my Terraform looked like this:

data "template_file" "fleet-boothook" {
  template = "${file("${path.module}/boothook.tpl")}"

  vars {
    ecs_cluster_name = "${aws_ecs_cluster.this.name}"
    ebs_volume_id    = "${var.singleton_cluster_ebs_volume}"
  }
}
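The boothook template itself is not shown above; a minimal sketch of what such a boothook.tpl could look like, assuming the instance profile allows ec2:AttachVolume, the AWS CLI is installed on the AMI, and `ecs_cluster_name`/`ebs_volume_id` are the template_file vars from the snippet above:

```shell
#!/bin/bash
# Hypothetical boothook.tpl sketch: register with the ECS cluster, then
# attach a known EBS volume to whichever instance this script runs on.
# Assumes the instance profile grants ec2:AttachVolume and the AWS CLI exists.

# ${ecs_cluster_name} and ${ebs_volume_id} are rendered by template_file
echo "ECS_CLUSTER=${ecs_cluster_name}" >> /etc/ecs/ecs.config

# Discover this instance's ID and region from the instance metadata service
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
REGION=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/.$//')

# Attach the pre-existing EBS volume to this instance
aws ec2 attach-volume \
  --region "$REGION" \
  --volume-id "${ebs_volume_id}" \
  --instance-id "$INSTANCE_ID" \
  --device /dev/sdp
```

Note that shell variables are written without braces (`$REGION`, not `${REGION}`) so Terraform's template interpolation does not try to resolve them.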
How are you creating the instances in the ECS cluster? - ydaetskcoR
I have a separate ecs-cluster module written. Are you asking what my aws_launch_configuration looks like? - infracodeary

1 Answer

0
votes

I am also going through this problem, and the conclusion I am coming to accept is that there is no easy way to do this. This conclusion is backed by this issue and this other one. I have seen third-party solutions like this one, but I am not comfortable resorting to third-party apps within my infrastructure.

In contrast, there is this repo, which claims to have found a way to provide nodes with EFS. I know it is not EBS, but it could provide a way to share data between containers and a master EC2 instance, regardless of which specific instance the containers are running on.

Maybe you could find a way to attach existing EC2 instances to your cluster so you have more control over provisioning them. This is what I am probably going to try, but I realise it defeats half the purpose of using ECS, which is abstracting away how you launch containers without caring about the underlying host infrastructure.


Edit:

I managed to create an ECS cluster in which all instances have an EFS partition mounted.

  • Create the EFS file system in the same region as your cluster, with a mount target in each Availability Zone your instances use [1]. Be sure to create the mount targets in private subnets.

  • Modify your launch configuration UserData [2] script to install amazon-efs-utils [3]. If you are using ECS-optimized Amazon Linux images, just add the following to your script:

    sudo yum install -y amazon-efs-utils

  • Add a step to the UserData script that creates a mount point and mounts the previously created EFS volume:

    mkdir -p /mnt/efs
    mount -t efs fs-12345678:/ /mnt/efs

Change fs-12345678 to the ID of your EFS file system.
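Put together, the whole UserData script might look like the sketch below (assuming an ECS-optimized Amazon Linux AMI; fs-12345678 is a placeholder for your EFS file system ID):

```shell
#!/bin/bash
# Sketch of a complete UserData script: install the EFS mount helper,
# then mount the file system at /mnt/efs on every instance at boot.
yum install -y amazon-efs-utils   # provides the "efs" mount type

mkdir -p /mnt/efs
mount -t efs fs-12345678:/ /mnt/efs

# Optionally persist the mount across reboots
echo 'fs-12345678:/ /mnt/efs efs defaults,_netdev 0 0' >> /etc/fstab
```

Because every instance in the launch configuration runs the same script, each new cluster node comes up with the shared EFS partition already mounted.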

I guess you could expose an EBS volume using something like Samba, but EFS does not need a dedicated EC2 instance to be served: it gets a DNS name and can be mounted directly.

[1] https://docs.aws.amazon.com/efs/latest/ug/troubleshooting-efs-mounting.html#mount-fails-dns-name

[2] https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html#user-data-shell-scripts

[3] https://docs.aws.amazon.com/efs/latest/ug/using-amazon-efs-utils.html