In our case we are doing the following:
1. Infra Agent
   a. We create a KV.
   b. We create a SQL Database in the tf script, including assigning an admin username and password (randomly generated value).
   c. We store the username and password as secrets in the newly created KV.
2. Data Agent
   a. We want to deploy the DDL from the repos onto the SQL Database we created in the Infra Agent. We need the SQL database username and password stored in the KV to do so.
   b. In order to read the secrets from the KV, our current thinking is to insert the username and password into pipeline parameters in step 1 (i.e. setting them at runtime) so we can reuse the values across other Agents.

A couple of questions:
- Is that the right approach? Should KV be created in the Infra Agent tf scripts? Should we randomly generate passwords (as secrets)?
- What is best practice to access the Database username and password in other Agents, given that:
  - We can’t use variable groups because the KV and values won’t be known until runtime
  - We can’t use the Key Vault Task (https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-key-vault?view=azure-devops) to read from the KV because the KV name is only known at runtime (via the tf vars file)

1 Answer


b. We create a SQL Database in the tf script, including assigning an admin username and password (randomly generated value).

If you're using Key Vault, then I assume you're talking about Azure SQL Databases. However, at the moment Terraform only supports assigning an administrator username and password for the SQL Server instance, not for individual SQL databases.

In this case, I recommend using a random_password resource to generate the value for an azurerm_key_vault_secret, which can then be assigned as the azurerm_sql_server administrator password.

With this setup you know for certain that the password in Key Vault is always in sync with the server, and it can be treated as the source of truth for your SQL server passwords (unless someone resets the administrator password manually, of course).

Now if you ever want to reset an SQL server password, simply taint the random_password, forcing it to be recreated with a new value, which in turn updates the azurerm_key_vault_secret value and then the azurerm_sql_server password.
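
For example, assuming the resource names used in the snippet below, the reset is just two CLI commands (terraform taint marks the resource so the next apply recreates it):

terraform taint random_password.password
terraform apply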

Here's some quick HCL as an example (the resource names and literal values below are illustrative, so adjust them to your environment):

resource "random_password" "password" {
    length = 16
    special = false
}

resource "azurerm_key_vault_secret" "password_secret" {
    depends_on = [<the Key Vault access policy for your infra agent which runs terraform apply>]
    ...
    value = random_password.password.result
    ...
}

resource "azurerm_sql_server" "sql_server" {
    ...
    administrator_login_password = azurerm_key_vault_secret.password_secret.value
    ...
}
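
The depends_on above matters: the identity running terraform apply needs Set (and usually Get) permission on secrets before the azurerm_key_vault_secret can be created. Here's a minimal sketch of that access policy, assuming the infra agent runs as the same service principal Terraform authenticates with (resource names are illustrative):

data "azurerm_client_config" "current" {}

resource "azurerm_key_vault_access_policy" "infra_agent" {
    key_vault_id = azurerm_key_vault.demo_key_vault.id
    tenant_id    = data.azurerm_client_config.current.tenant_id
    object_id    = data.azurerm_client_config.current.object_id

    secret_permissions = [
        "Get",
        "Set",
        "List",
        "Delete",
    ]
}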

Is that the right approach? Should KV be created in the Infra Agent tf scripts? Should we randomly generate passwords (as secrets)?

This is a sensible approach, but remember that billing is per secret, key, or cert, and Key Vaults themselves are free. It's recommended to create a Key Vault for each application, because access policies can only be applied per Key Vault and not per individual secret/key/cert.
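
For reference, here's a minimal sketch of the per-application Key Vault that the snippets above refer to as azurerm_key_vault.demo_key_vault (the name and resource group reference are illustrative):

resource "azurerm_key_vault" "demo_key_vault" {
    name                = "demo-app-kv"
    location            = azurerm_resource_group.demo.location
    resource_group_name = azurerm_resource_group.demo.name
    tenant_id           = data.azurerm_client_config.current.tenant_id
    sku_name            = "standard"
}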

We can’t use the Key Vault Task (https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-key-vault?view=azure-devops) to read from the KV because the KV name is only known at runtime (via the tf vars file)

Why is this only known at runtime? This sounds like a limitation of your own process, since Terraform allows you to specify a name for each Key Vault when you create it. Reconsider whether this is really a requirement and why you are doing it. If it definitely is a requirement and your Key Vault names are dynamically generated, then you can use terraform output to get the Key Vault name during the pipeline and set it as a variable during the build.

To fetch the Key Vault name as an output, just use the following HCL:

output "key_vault_name" {
    value = "${azurerm_key_vault.demo_key_vault.name}"
}

and run `terraform output key_vault_name` to write the value to stdout.
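
From there, a sketch of setting that value as a pipeline variable inside an Azure DevOps script step (the variable name keyVaultName is illustrative; ##vso[task.setvariable] is the standard logging command for setting pipeline variables at runtime):

# Run after terraform apply, in the same working directory.
# On newer Terraform versions (0.15+) you can add -raw to strip quoting.
KV_NAME=$(terraform output key_vault_name)
echo "##vso[task.setvariable variable=keyVaultName]$KV_NAME"

Subsequent tasks in the same job can then read it as $(keyVaultName); to share it across jobs or stages you'd additionally need isOutput=true on the setvariable command.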