
I need to provision a number of VMs in Azure from a Custom Image using Terraform. The Image is rather complex: it defines a machine with 16 data disks pre-configured to run a high-performance Oracle database. My assumption was that I wouldn't have to configure storage_data_disk blocks inside the azurerm_virtual_machine resource, because all the disks are already configured inside the Image.

That seemed to be true. When I created a VM using a custom storage_image_reference, all data disks were created with the right LUNs and sizes, and all pre-installed software worked as expected:

resource "azurerm_virtual_machine" "database" {
  name                  = "${var.prefix}-vm"
  location              = "${azurerm_resource_group.main.location}"
  resource_group_name   = "${azurerm_resource_group.main.name}"
  network_interface_ids = ["${azurerm_network_interface.main.id}"]
  vm_size               = "Standard_E16s_v3"

  delete_os_disk_on_termination    = true
  delete_data_disks_on_termination = true

  storage_image_reference {
    id = "/subscriptions/ABC/resourceGroups/XYZ/providers/Microsoft.Compute/images/CUSTOM-IMAGE"
  }

  storage_os_disk {
    name              = "${var.prefix}-os-disk"
    create_option     = "FromImage"
    managed_disk_type = "Premium_LRS"
  }

  os_profile {
    computer_name  = "hostname"
    admin_username = "testadmin"
    admin_password = "Password1234!"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }
}

The problem was that all data disks were created as Standard_HDD, the slowest disk type, while I wanted them to be Premium_SSD, as in the original VM from which the Image was taken.

Eventually, I solved this by adding an explicit storage_data_disk block for each data disk inside the azurerm_virtual_machine resource, like this:

resource "azurerm_virtual_machine" "database" {
  name                  = "${var.prefix}-vm"
  location              = "${azurerm_resource_group.main.location}"
  resource_group_name   = "${azurerm_resource_group.main.name}"
  network_interface_ids = ["${azurerm_network_interface.main.id}"]
  vm_size               = "Standard_E16s_v3"

  delete_os_disk_on_termination    = true
  delete_data_disks_on_termination = true

  storage_image_reference {
    id = "/subscriptions/ABC/resourceGroups/XYZ/providers/Microsoft.Compute/images/CUSTOM-IMAGE"
  }

  #-------------------------------------------------------------------
  #   Explicit Data Disk configuration starts here
  #-------------------------------------------------------------------

  storage_data_disk {
    name              = "home-disk"
    managed_disk_type = "Premium_LRS"
    disk_size_gb      = 100
    create_option     = "FromImage"
    lun               = 0
  }

  storage_data_disk {
    name              = "u01-disk"
    managed_disk_type = "Premium_LRS"
    disk_size_gb      = 200
    create_option     = "FromImage"
    lun               = 1
  }

  storage_data_disk {
    name              = "backup-disk-0"
    managed_disk_type = "Premium_LRS"
    disk_size_gb      = 1023
    create_option     = "FromImage"
    lun               = 2
  }

  #-------------------------------------------------------------------
  #   Skipped 12 disks ...
  #-------------------------------------------------------------------

  storage_data_disk {
    name              = "data-disk-9"
    managed_disk_type = "Premium_LRS"
    disk_size_gb      = 512
    create_option     = "FromImage"
    lun               = 15
  }

  #-------------------------------------------------------------------
  #   Explicit Data Disk configuration ends here
  #-------------------------------------------------------------------

  storage_os_disk {
    name              = "${var.prefix}-os-disk"
    create_option     = "FromImage"
    managed_disk_type = "Premium_LRS"
  }

  os_profile {
    computer_name  = "hostname"
    admin_username = "testadmin"
    admin_password = "Password1234!"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }
}

That worked, and all data disks are now created as Premium_SSD, but this solution feels wrong because it must be kept in perfect sync with the source Image at all times. If the team that prepares the Image decides to add or remove a disk, or to change the size of one of them, that change must be reflected in my TF template.
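At best, I can reduce the duplication to a single list that still has to be kept in sync with the Image. A minimal sketch, assuming Terraform 0.12+ where dynamic blocks are available:

locals {
  # This list still has to mirror the disks defined in the Image.
  image_data_disks = [
    { name = "home-disk", size_gb = 100, lun = 0 },
    { name = "u01-disk",  size_gb = 200, lun = 1 },
    # ... remaining disks ...
  ]
}

resource "azurerm_virtual_machine" "database" {
  # ... same arguments as above ...

  # Generate one storage_data_disk block per entry in the list.
  dynamic "storage_data_disk" {
    for_each = local.image_data_disks
    content {
      name              = storage_data_disk.value.name
      managed_disk_type = "Premium_LRS"
      disk_size_gb      = storage_data_disk.value.size_gb
      create_option     = "FromImage"
      lun               = storage_data_disk.value.lun
    }
  }
}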

Interestingly, when a new VM is created from the same Image using the Azure Portal UI, all data disks are created as Premium_SSD. I didn't have to configure them and didn't even know upfront how many data disks were defined in the Image or what their sizes were. But when I use Terraform, all data disks are created as Standard_HDD.

Is there a way to tell Terraform/the Azure provider which disk type to use for VMs provisioned from Custom Images, without explicitly configuring each of them?

Thank you!


1 Answer


Unfortunately, the approach you describe is, as far as I know, currently the only way to control the data disk type when creating an Azure VM through Terraform.

Setting the disk type for all disks in the VM requires a single parameter to select. In the Azure Portal, you can choose Premium SSD as the OS disk type, and the VM is then created with the OS disk and all data disks as Premium SSD.

Likewise, when you create a VM from a custom image through the Azure CLI, there is a parameter to set the disk type: --storage-sku. You can pass the value Premium_LRS to create all disks as Premium SSD. See az vm create.
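For illustration, a minimal invocation might look like this (the resource group, VM name, and credentials are placeholders; the image ID is the one from your template):

az vm create \
  --resource-group XYZ \
  --name database-vm \
  --image "/subscriptions/ABC/resourceGroups/XYZ/providers/Microsoft.Compute/images/CUSTOM-IMAGE" \
  --size Standard_E16s_v3 \
  --storage-sku Premium_LRS \
  --admin-username testadmin \
  --admin-password 'Password1234!'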

But unfortunately, there is currently no equivalent property on the Terraform virtual machine resource. You could raise an issue with the Terraform community to get it added.