
I am trying to use the Terraform file provisioner to upload a directory to an Azure VM over WinRM, and I am getting various errors and timeouts. The Windows Server 2019 VM deploys just fine, and once it is up I can open a PowerShell remoting session to it over WinRM. But when I add the file provisioner (shown below), I get one of the following errors:

Error: timeout - last error: http response error: 401 - invalid content type

Or this error, depending on how I toggle the https and insecure settings between true and false:

Error: timeout - last error: unknown error Post https://52.176.165.48:5985/wsman: http: server gave HTTP response to HTTPS client

Is there a better way to upload a directory and execute a PowerShell post-deployment script after the VM is provisioned?

Here is my *.tf file:

locals {
  virtual_machine_name = "${var.prefix}-dc1"
  virtual_machine_fqdn = "${local.virtual_machine_name}.${var.active_directory_domain}"
  custom_data_params   = "Param($RemoteHostName = \"${local.virtual_machine_fqdn}\", $ComputerName = \"${local.virtual_machine_name}\")"
  custom_data_content  = "${local.custom_data_params} ${file("${path.module}/files/winrm.ps1")}"
}
resource "azurerm_availability_set" "dcavailabilityset" {
  name                         = "dcavailabilityset"
  resource_group_name          = "${var.resource_group_name}"
  location                     = "${var.location}"
  platform_fault_domain_count  = 3
  platform_update_domain_count = 5
  managed                      = true
}

resource "azurerm_virtual_machine" "domain-controller" {
  name                          = "${local.virtual_machine_name}"
  location                      = "${var.location}"
  resource_group_name           = "${var.resource_group_name}"
  availability_set_id           = "${azurerm_availability_set.dcavailabilityset.id}"
  network_interface_ids         = ["${azurerm_network_interface.primary.id}"]
  vm_size                       = "Standard_A1"
  delete_os_disk_on_termination = false

  storage_image_reference {
    publisher = "MicrosoftWindowsServer"
    offer     = "WindowsServer"
    sku       = "2019-Datacenter"
    version   = "latest"
  }

  storage_os_disk {
    name              = "${local.virtual_machine_name}-disk1"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  os_profile {
    computer_name  = "${local.virtual_machine_name}"
    admin_username = "${var.admin_username}"
    admin_password = "${var.admin_password}"
    custom_data    = "${local.custom_data_content}"
  }

  os_profile_windows_config {
    provision_vm_agent        = true
    enable_automatic_upgrades = false

    additional_unattend_config {
      pass         = "oobeSystem"
      component    = "Microsoft-Windows-Shell-Setup"
      setting_name = "AutoLogon"
      content      = "<AutoLogon><Password><Value>${var.admin_password}</Value></Password><Enabled>true</Enabled><LogonCount>1</LogonCount><Username>${var.admin_username}</Username></AutoLogon>"
    }

    # Unattend config is to enable basic auth in WinRM, required for the provisioner stage.
    additional_unattend_config {
      pass         = "oobeSystem"
      component    = "Microsoft-Windows-Shell-Setup"
      setting_name = "FirstLogonCommands"
      content      = "${file("${path.module}/files/FirstLogonCommands.xml")}"
    }
  }

  provisioner "file" {
    source      = "BadBlood"
    destination = "C:/BadBlood"
    connection {
      host     = "${azurerm_public_ip.dc1-external.ip_address}"
      type     = "winrm"
      user     = "${var.admin_username}"
      password = "${var.admin_password}"
      timeout  = "15m"
      https    = false
      port     = "5985"
      insecure = true
    }

  }

}
Are you sure WinRM is really enabled? If yes, you also need to create NSG rules to allow inbound traffic on port 5985 while there is an NSG associated with the VM. – Charles Xu

Yes, I ended up resolving this by adding a Windows firewall rule to allow port 5986 for WinRM over HTTPS, and changed the provisioner to use port 5986 with https set to true. – Jason

OK, if you solved it yourself, please add an answer to show it, or just delete the question. – Charles Xu
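For anyone who does have an NSG attached to the NIC or subnet, here is a minimal sketch of the kind of inbound rule Charles Xu refers to, allowing the WinRM HTTPS port; the rule name, priority, and NSG resource name are assumptions:

resource "azurerm_network_security_rule" "winrm-https" {
  name                        = "Allow-WinRM-HTTPS"
  priority                    = 100
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "5986"
  source_address_prefix       = "*"
  destination_address_prefix  = "*"
  resource_group_name         = "${var.resource_group_name}"

  # Assumed NSG resource; replace with the NSG actually attached to the NIC/subnet.
  network_security_group_name = "${azurerm_network_security_group.example.name}"
}

Use destination_port_range = "5985" instead if you stay on plain-HTTP WinRM.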

1 Answer


Here was the resolution. There was already a winrm.ps1 script being used in the Azure VM extension to do auto-provisioning. Since it already configured an HTTPS listener for WinRM, I had to add an entry to open port 5986 so that listener was actually reachable:

Write-Host "Enable HTTPS in WinRM"
$WinRmHttps = "@{Hostname=`"$RemoteHostName`"; CertificateThumbprint=`"$Thumbprint`"}"
winrm create winrm/config/Listener?Address=*+Transport=HTTPS $WinRmHttps

Write-Host "Set Basic Auth in WinRM"
$WinRmBasic = "@{Basic=`"true`"}"
winrm set winrm/config/service/Auth $WinRmBasic

Write-Host "Open Firewall Ports"
netsh advfirewall firewall add rule name="Windows Remote Management (HTTP-In)" dir=in action=allow protocol=TCP localport=5985
netsh advfirewall firewall add rule name="Windows Remote Management (HTTPS-In)" dir=in action=allow protocol=TCP localport=5986

I had to do a lot of packet debugging with Wireshark and netcat to figure this out and to test the Azure VM from the outside. There were no NSG rules configured, as this is just a test-lab Azure VM.
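To reproduce that outside-in test, a quick check that the WinRM HTTPS port is actually reachable before blaming the provisioner (the IP is the one from the error above):

# From the machine running Terraform, with netcat:
nc -vz 52.176.165.48 5986

# Or from another Windows machine, with PowerShell:
Test-NetConnection -ComputerName 52.176.165.48 -Port 5986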

Last, I had to configure the file provisioner to upload correctly, with https set to true and port 5986:

  provisioner "file" {
    source      = "${path.module}/files/badblood.zip"
    destination = "C:/terraform/badblood.zip"
    connection {
      host     = "${azurerm_public_ip.dc1-external.ip_address}"
      type     = "winrm"
      user     = "${var.admin_username}"
      password = "${var.admin_password}"
      timeout  = "15m"
      https    = true
      port     = "5986"
      insecure = true
    }
  }
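Since the original question also asked about executing a PowerShell post-deployment script: once the zip is uploaded, a remote-exec provisioner over the same WinRM connection can unpack and run it. A sketch, assuming the archive contains an Invoke-BadBlood.ps1 entry point (the script name and paths are assumptions):

  provisioner "remote-exec" {
    inline = [
      # Unpack the uploaded archive, then run the assumed entry-point script.
      "powershell.exe -ExecutionPolicy Bypass -Command \"Expand-Archive C:/terraform/badblood.zip -DestinationPath C:/terraform/badblood\"",
      "powershell.exe -ExecutionPolicy Bypass -File C:/terraform/badblood/Invoke-BadBlood.ps1",
    ]
    connection {
      host     = "${azurerm_public_ip.dc1-external.ip_address}"
      type     = "winrm"
      user     = "${var.admin_username}"
      password = "${var.admin_password}"
      timeout  = "15m"
      https    = true
      port     = "5986"
      insecure = true
    }
  }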