I'm in the process of writing Packer and Terraform code to create immutable infrastructure on AWS. However, it does not seem very straightforward to format a disk with ext4 and mount it.
The steps seem simple:
- Create the AMI with Packer on a t2.micro instance; it contains all the software and will be used first in test and afterwards in production.
- Launch an r3.4xlarge instance from this AMI, which has a 300GB ephemeral disk. Format this disk as ext4, mount it, and redirect /var/lib/docker to the new filesystem for performance reasons (a sketch of this step follows the list).
- Complete the rest of the application launch.
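The second step boils down to a few shell commands at boot. A minimal sketch, assuming the ephemeral disk appears as /dev/xvdb and Docker was installed into the AMI by Packer (the device path and mount point here are my assumptions, not taken from the question):

#!/bin/bash
set -euo pipefail

DEVICE="/dev/xvdb"        # assumed device path of the ephemeral disk
MOUNTPOINT="/mnt/docker"  # assumed mount point for the new filesystem

# Format and mount the ephemeral disk.
mkfs.ext4 "${DEVICE}"
mkdir -p "${MOUNTPOINT}"
mount "${DEVICE}" "${MOUNTPOINT}"

# Redirect /var/lib/docker: stop Docker, copy any existing data onto
# the new filesystem, then bind-mount it back into place.
systemctl stop docker || true
if [ -d /var/lib/docker ]; then
  cp -a /var/lib/docker/. "${MOUNTPOINT}/"
  rm -rf /var/lib/docker
fi
mkdir -p /var/lib/docker
mount --bind "${MOUNTPOINT}" /var/lib/docker
systemctl start docker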
First of all:
Is it best practice to create the AMI on the same instance type you will run it on, or to have one 'generic' image and start multiple instance types from it? Which philosophy is best?
- Packer (software versions) -> Terraform (instance + mount disk) -> deploy?
- Packer (software versions) -> Packer (instance-type-specific mounts) -> Terraform (instance) -> deploy?
- Packer (software versions, instance-specific mounts) -> Terraform -> deploy?
The last option is starting to look better and better, but it requires an AMI per instance type.
What I have tried so far:
According to this answer, it is better to use the user_data approach instead of provisioners, so I'm going down that road.
This answer seemed promising, but it is so old that it no longer works. I could update it, but there might be a different, better way.
This answer also seemed promising, but it complained about ${DEVICE}. I am wondering where that variable comes from, as there are no vars specified in the template_file. If I set my own DEVICE variable to xvdb, it runs but produces no result: xvdb is visible in lsblk but not in blkid.
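On the blkid point: blkid only lists block devices that already carry a recognizable filesystem (or similar) signature, so a fresh ephemeral disk shows up in lsblk but not in blkid until it has been formatted. That also suggests the usual idempotency guard for such a script; a minimal sketch, assuming the device is /dev/xvdb:

DEVICE="/dev/xvdb"  # assumed; note the template below passes "xvdb" without the /dev/ prefix

# blkid exits non-zero for a device with no filesystem signature,
# so only format when nothing is found (keeps reboots from wiping data).
if ! blkid "${DEVICE}" > /dev/null 2>&1; then
  mkfs.ext4 "${DEVICE}"
fi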
Here is my code. The format_disk.sh file is the same as the one mentioned above. Any help is greatly appreciated.
# Launch an r3.4xlarge instance from the latest Ubuntu 16.04 AMI,
# with an AWS Tag naming it "test1"
provider "aws" {
  region = "us-east-1"
}

data "template_file" "format-disks" {
  template = "${file("format_disk.sh")}"

  vars {
    DEVICE = "xvdb"
  }
}

resource "aws_instance" "test1" {
  ami                         = "ami-98181234"
  instance_type               = "r3.4xlarge"
  key_name                    = "keypair-1"       # This needs to be changed so multiple users can use this
  subnet_id                   = "subnet-a0aeb123" # maps to the VPC for US production
  associate_public_ip_address = "true"
  vpc_security_group_ids      = ["sg-f3e91234"]   # backend servers
  user_data                   = "${data.template_file.format-disks.rendered}"

  tags {
    Name = "test1"
  }

  ephemeral_block_device {
    device_name  = "xvdb"
    virtual_name = "ephemeral0"
  }
}
format_disk.sh (contents not reproduced here; it is the same script as in the answer linked above)
One commenter suggested an alternative: bake the format_disk.sh script into your AMI with Packer, along with some configuration to make it run on boot, and take this out of Terraform's hands altogether. Terraform would then just arrange for the disk to be attached, and the software in the AMI would take care of getting it formatted as part of its startup. This way you can also more easily coordinate that operation with any other application startup tasks, to ensure that e.g. an app that needs the filesystem doesn't boot until after the filesystem is formatted and mounted. – Martin Atkins
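To make that suggestion concrete, here is a sketch of what baking it in could look like: during the Packer build, a shell provisioner installs the script plus a systemd unit that runs it before Docker starts (the unit name and install paths are hypothetical):

# Run during the Packer build (e.g. from a shell provisioner), not at boot.
sudo install -m 0755 format_disk.sh /usr/local/bin/format_disk.sh

# A oneshot unit ordered before docker.service, so anything that needs
# the filesystem only starts after it is formatted and mounted.
sudo tee /etc/systemd/system/format-disk.service > /dev/null <<'EOF'
[Unit]
Description=Format and mount the ephemeral disk
Before=docker.service

[Service]
Type=oneshot
ExecStart=/usr/local/bin/format_disk.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl enable format-disk.service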