I am also running into this problem, and the conclusion I have come to accept is that there isn't an easy way to do this. That conclusion is backed by this issue and this other one. I have seen third-party solutions like this one, but I am not comfortable introducing third-party apps into my infrastructure.
In contrast, there is this repo which claims to have found a way to provision nodes with EFS. I know EFS is not EBS, but it would provide a way to share data between containers and a master EC2 instance, regardless of which specific instance the containers are running on.
Maybe you could find a way to attach existing EC2 instances to your cluster so you have more control over provisioning them. This is what I will probably try, but I realise it defeats half the purpose of using ECS, which is abstracting away how you launch containers so you don't have to care about the underlying host infrastructure.
Edit:
I managed to create an ECS cluster in which all instances have an EFS partition mounted.
Create the EFS volume in the same region as your cluster [1]. Be sure to create it in a private subnet.
Modify your launch configuration's UserData [2] script to install amazon-efs-utils [3]. If you are using the ECS-optimized Amazon Linux AMI, just add the following to your script:
sudo yum install -y amazon-efs-utils
Add a step to the UserData bash script that creates a mount point and mounts the previously created EFS volume:
mkdir -p /mnt/efs
mount -t efs fs-12345678:/ /mnt/efs
Change fs-12345678 to the ID of your EFS volume.
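Putting the steps together, the UserData script for the launch configuration might look something like the sketch below. The file system ID fs-12345678 and the mount point /mnt/efs are placeholders you would substitute with your own values:

```shell
#!/bin/bash
# Install the EFS mount helper (ECS-optimized Amazon Linux AMI)
yum install -y amazon-efs-utils

# Create a mount point and mount the EFS volume
# (replace fs-12345678 with your file system ID)
mkdir -p /mnt/efs
mount -t efs fs-12345678:/ /mnt/efs

# Optionally persist the mount across reboots via /etc/fstab
echo 'fs-12345678:/ /mnt/efs efs defaults,_netdev 0 0' >> /etc/fstab
```

UserData runs as root on first boot, so no sudo is needed inside the script.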
I guess you could expose an EBS volume using something like Samba, but EFS does not need a dedicated EC2 instance to serve it: it gets a DNS name and can be mounted directly.
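To illustrate the DNS point: because each EFS file system is reachable through a DNS name, you can also mount it with the stock NFSv4.1 client instead of the mount helper. The file system ID and region below are placeholders:

```shell
# Mount EFS via its DNS name using the plain NFS client
# (requires nfs-utils instead of amazon-efs-utils)
mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
  fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs
```

The DNS name only resolves from inside the VPC that contains the file system's mount targets, which is why [1] matters.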
[1] https://docs.aws.amazon.com/efs/latest/ug/troubleshooting-efs-mounting.html#mount-fails-dns-name
[2] https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html#user-data-shell-scripts
[3] https://docs.aws.amazon.com/efs/latest/ug/using-amazon-efs-utils.html