1
votes

I am new to Terraform and would like some help. I have successfully created a VM and can manually SSH into it with no problem. The issue is that I am working with a team on a project, and they can't make any changes to the Terraform files without Terraform deleting all the resources and recreating them. I think this is because they have a different SSH key from mine.

admin_ssh_key {
  username   = "azureroot"
  public_key = file("~/.ssh/id_rsa.pub")
}

Because the contents of my SSH key are different from my teammates', Terraform will destroy the VM and recreate it using the key of whoever ran the terraform apply. Is there any way to get around this? It has caused many issues, because we have had multiple VMs destroyed when the keys were different.

4
There are a few ways around this. If both of you are using the same user, you could combine the public keys. Not a good idea, but it works. Another option would be cloud-init, where you can add the user and keys, but the Azure waagent needs to be told about it, and you may have problems snapshotting the instance because of stale user data. I don't understand why both teams are creating the same resource. – harshavmb

The teams are not creating the same resources. For example, if I am the first person to run terraform apply, my key is associated with the VM. But later, when a teammate needs to modify an inbound/outbound rule and runs terraform plan, it will delete the VM and spin up a new one, even though we made no changes to it. When I look at the output, it's because Terraform detected a different key (my key is different from my team's). – python-noob

Okay. In that case, club both keys together. public_key - (Required) The Public Key which should be used for authentication, which needs to be at least 2048-bit and in ssh-rsa format. Changing this forces a new resource to be created. Refer to this: terraform.io/docs/providers/azurerm/r/… – harshavmb

Any updates on the question? Does it solve your problem? – Charles Xu

Yes, your answer below helped! Thanks everyone for chiming in. – python-noob
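To illustrate the "club the keys together" idea from the comments: the azurerm_linux_virtual_machine resource accepts more than one admin_ssh_key block, so a sketch along these lines could register both teammates' keys for the same user (the key file paths are hypothetical; the point is that the keys are committed to the repo rather than read from each person's ~/.ssh, so every machine sees identical content):

```hcl
# Both keys live in the repository, so terraform plan produces the
# same result on every teammate's machine.
admin_ssh_key {
  username   = "azureroot"
  public_key = file("${path.module}/keys/alice_id_rsa.pub") # hypothetical path
}

admin_ssh_key {
  username   = "azureroot"
  public_key = file("${path.module}/keys/bob_id_rsa.pub") # hypothetical path
}
```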

4 Answers

2
votes

The problem is due to the configuration of the VM. It seems you are using the resource azurerm_linux_virtual_machine and setting the SSH key like this:

admin_username = "azureroot"

admin_ssh_key {
  username   = "azureroot"
  public_key = file("~/.ssh/id_rsa.pub")
}

For the public key, you use the function file() to load the public key from the current machine at the path ~/.ssh/id_rsa.pub. So when the code runs on a different machine, such as your teammate's, the public key is different from yours, and that is what causes the problem.

Here is a suggestion for you: use a static public key string like this:

admin_username = "azureroot"

admin_ssh_key {
  username   = "azureroot"
  public_key = "xxxxxxxxx"
}

Then no matter where you execute the Terraform code, the public key will not cause a diff, and you can change other things as you want, for example the NSG rules.
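A variation on the static-key idea is to pass the shared key in through an input variable, so the literal string stays out of the resource block (the variable name here is illustrative):

```hcl
# Shared team key, supplied once in a committed terraform.tfvars
# so every machine produces the same plan.
variable "admin_public_key" {
  description = "Shared team public key in ssh-rsa format"
  type        = string
}

admin_ssh_key {
  username   = "azureroot"
  public_key = var.admin_public_key
}
```

Whether you inline the key or use a variable, the essential point is the same: the value must come from the shared repository, not from a path that differs per machine.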

1
votes

You can simply add everyone's public key to the same file that is used to create the root SSH key. This is the pragmatic approach, but it should not be promoted as a standard. As a best practice, you should add each user individually, so that their user account and public key are created by your provisioning; they then log in as their own user and escalate privilege as required.
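One way to sketch the per-user approach on Azure is cloud-init passed through the VM's custom_data (the usernames and key paths below are made up for illustration):

```hcl
resource "azurerm_linux_virtual_machine" "example" {
  # ... other required arguments omitted ...

  # cloud-init creates one account per teammate, each with their own key.
  custom_data = base64encode(<<-EOF
    #cloud-config
    users:
      - name: alice
        groups: sudo
        shell: /bin/bash
        ssh_authorized_keys:
          - ${file("${path.module}/keys/alice.pub")}
      - name: bob
        groups: sudo
        shell: /bin/bash
        ssh_authorized_keys:
          - ${file("${path.module}/keys/bob.pub")}
  EOF
  )
}
```

Because custom_data only runs at first boot, adding a new teammate later still means replacing the VM; ongoing user management is better handled by the configuration management tools mentioned in the next answer.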

1
votes

Your problem is not really a good use case for Terraform. A system user for bootstrap or updates should have its key locked away. For traceability, I would restrict the private key to use during a break-glass scenario only.

If you want everyone to use a different key but the same user, I would suggest looking at a configuration management tool such as Puppet, Chef, Ansible, or Salt; the list goes on. Otherwise, you should share the key.

https://serverfault.com/questions/471753/what-are-best-practices-for-managing-ssh-keys-in-a-team/471799

0
votes

Maybe this will help someone who has the same issue as me.

You can generate a new private key and public key using the Terraform configuration language itself. Here is an example:

resource "tls_private_key" "example_ssh" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "azurerm_linux_virtual_machine" "myterraformvm" {
  # ... name, resource_group_name, location, size, network_interface_ids,
  # os_disk and source_image_reference omitted for brevity ...
  computer_name                   = "myvm"
  admin_username                  = "azureuser"
  disable_password_authentication = true

  admin_ssh_key {
    username   = "azureuser"
    public_key = tls_private_key.example_ssh.public_key_openssh # The magic here
  }

  tags = {
    environment = "Terraform Demo"
  }
}
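If you go this route, you will also need the generated private key in order to log in. One way to expose it is a sensitive output (the output name here is arbitrary):

```hcl
# The generated key is stored in Terraform state, so treat the
# state file itself as a secret.
output "vm_private_key" {
  value     = tls_private_key.example_ssh.private_key_pem
  sensitive = true
}
```

You could then retrieve it with something like `terraform output -raw vm_private_key > vm_key.pem` and set its permissions to 600 before using it with ssh. Note the trade-off: because the key lives in shared state, every teammate gets the same key, which avoids the recreate problem but weakens per-user traceability.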