2 votes

I want to combine Ansible with Terraform so that Terraform creates the machines and Ansible provisions them. Using terraform-provisioner-ansible it's possible to bring them together seamlessly. But I noticed a lack of change detection, which doesn't happen when Ansible runs standalone.

TL;DR: How can I get the Terraform Ansible plugin to apply changes made on the Ansible side? Or at least execute the Ansible plugin on every apply so that Ansible can handle this itself?

Example use case

Consider this playbook which installs some packages

- name: Ansible install package test
  hosts: all
  tasks: 
  - name: Install cli tools
    become: yes
    apt:
      name: "{{ tools }}"
      update_cache: yes
    vars:
      tools:
        - nnn
        - htop

which is integrated into Terraform using the plugin

resource "libvirt_domain" "ubuntu18" {
  # ...
  connection {
    type = "ssh"
    host = "192.168.2.2"
    user = "ubuntu"
    private_key = "${file("~/.ssh/id_rsa")}"
  }
  provisioner "ansible" {
    plays {
      enabled = true
      become_method = "sudo"

      playbook = {
        file_path = "ansible-test.yml" 
      }
    }
  }
}

will work fine on the first run. But later I notice a package is missing:

- name: Ansible install package test
  hosts: all
  tasks: 
  - name: Install cli tools
    become: yes
    apt:
      name: "{{ tools }}"
      update_cache: yes
    vars:
      tools:
        - nnn
        - htop
        - vim # This is a new package

When running terraform plan I get No changes. Infrastructure is up-to-date. My new package vim will never get installed! Ansible didn't run, because if it had run, it would have installed the new package.

The problem seems to be the provisioner itself:

Creation-time provisioners are only run during creation, not during updating or any other lifecycle. They are meant as a means to perform bootstrapping of a system.

But what is the correct way of applying updates? I tried a null_resource with a depends_on link to my VM resource, but Terraform doesn't detect changes on the Ansible side either. This seems to be a lack of change detection in the Terraform plugin.
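For illustration, a minimal sketch of what I tried (the connection and plays blocks are the same as in the resource above):

resource "null_resource" "ansible" {
  depends_on = ["libvirt_domain.ubuntu18"]

  connection {
    # same ssh connection as above
  }

  provisioner "ansible" {
    # same plays block as above
  }
}

Since nothing in this resource ever changes after creation, terraform plan reports no diff and the provisioner never runs again.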

In the docs I only found destroy-time provisioners, but none for updates. I could destroy and re-create the machine, but that would slow things down a lot. I like the Ansible approach of checking what is present and only applying the changes that aren't already there; this seems a good way of provisioning.

Isn't it possible to do something similar with Terraform?

With my current experience (more Ansible than Terraform), I don't see any other way than dropping the nice plugin and executing Ansible on my own. But this would also drop the nice integration, so I'd need to generate inventory files myself or even by hand (which defeats the automation approach, in my view).

A source_code_hash-style trigger may be an option but is inflexible: with multiple plays/roles, I'd need to do this by hand for every single file, which quickly becomes error-prone.
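For a single playbook, such a hash trigger could look like this (a sketch using the standard md5 and file interpolation functions; every additional playbook or role file would need its own entry):

resource "null_resource" "ansible-provisioner" {
  triggers {
    # re-run the provisioner whenever the playbook content changes
    playbook_hash = "${md5(file("ansible-test.yml"))}"
  }

  # connection and provisioner "ansible" blocks as above
}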


3 Answers

2 votes

Use a null_resource with a pseudo trigger

The idea from tedsmitt uses a timestamp as the trigger, which seems to be the only way to force a provisioner. However, running ansible-playbook plain from the CLI would create the overhead of maintaining the inventory by hand. You also can't call the Python dynamic inventory script from there, since terraform apply needs to complete first.

In my view, a better approach is to run the ansible provisioner here:

resource "null_resource" "ansible-provisioner" {
  triggers {
      build_number = "${timestamp()}"
  }
  depends_on = ["libvirt_domain.ubuntu18"]

  connection {
    type = "ssh"
    host = "192.168.2.2"
    user = "ubuntu"
    private_key = "${file("~/.ssh/id_rsa")}"
  }
  provisioner "ansible" {
    plays {
      enabled = true
      become_method = "sudo"

      playbook = {
        file_path = "ansible-test.yml" 
      }
    }
  }
}

The only drawback here is that Terraform will recognize a pseudo change every time:

Terraform will perform the following actions:

-/+ null_resource.ansible-provisioner (new resource required)
      id:                    "3365240528326363062" => <computed> (forces new resource)
      triggers.%:            "1" => "1"
      triggers.build_number: "2019-06-04T09:32:27Z" => "2019-06-04T09:34:17Z" (forces new resource)


Plan: 1 to add, 0 to change, 1 to destroy.

This seems the best compromise to me, compared to the other available workarounds.
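As a side note, if you only want to re-run this Ansible step without planning the rest of the stack, a targeted apply can be used (standard Terraform flag):

terraform apply -target=null_resource.ansible-provisioner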

Run Ansible manually with dynamic inventory

Another way I found is the dynamic inventory plugin; a detailed description can be found in this blog entry. It integrates into Terraform and lets you specify resources as inventory hosts. An example:

resource "ansible_host" "k8s" {
  inventory_hostname = "192.168.2.2"
  groups             = ["test"]
  vars = {
    ansible_user = "ubuntu"
    ansible_ssh_private_key_file = "~/.ssh/id_rsa"
  }
}

The Python script uses this information to generate a dynamic inventory, which can be used like this:

ansible-playbook -i /etc/ansible/terraform.py ansible-test.yml

A big benefit is that it keeps your configuration DRY. Terraform holds the leading configuration; there is no need to also maintain separate Ansible inventory files. It also allows variable usage (e.g. the inventory hostname shouldn't be hardcoded for production use, as in my example).

In my use case (provisioning a Rancher test cluster) the null_resource approach seems better, since EVERYTHING is built with a single Terraform command and there is no need to execute Ansible separately. But depending on the requirements, it can be better to keep Ansible a separate step, so I posted this as an alternative.
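If you do keep Ansible a separate step, a small wrapper still gives you a single command (a sketch, assuming the dynamic inventory script is installed at /etc/ansible/terraform.py as above):

terraform apply -auto-approve && ansible-playbook -i /etc/ansible/terraform.py ansible-test.yml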

Installing the plugin

When trying this solution, remember that you need to install the corresponding Terraform plugin from here:

version=0.0.4
wget https://github.com/nbering/terraform-provider-ansible/releases/download/v${version}/terraform-provider-ansible-linux_amd64.zip -O terraform-provisioner-ansible.zip
unzip terraform-provisioner-ansible.zip
chmod +x linux_amd64/*
mv linux_amd64 ~/.terraform.d/plugins

Also note that the automated provisioner from the solution above needs to be removed first, since it has the same resource name and may conflict.

0 votes

As you mentioned in your question, there is no change detection in the plugin. You could implement a trigger on a null_resource so that it runs on every apply.

resource "null_resource" "ansible-provisioner" {
  triggers {
      build_number = "${timestamp()}"
  }
  provisioner "local-exec" {
    command = "ansible-playbook ansible-test.yml"
  }
}
0 votes

You can try this; it works for me.

resource "null_resource" "ansible-swarm-setup" {
  depends_on = [local_file.ansible_inventory]

  triggers = {
      instance_ids = join(",",openstack_compute_instance_v2.swarm-cluster-hosts[*].id)
  }
  connection {
    type = "ssh"
    user = var.ansible_user
    timeout = "3m"
    private_key = var.private_ssh_key
    host = local.cluster_ips[0]
  }
}

When it detects a change in the instance IDs, it will trigger the Ansible playbook.
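The snippet above omits the provisioner itself; given the connection block, presumably an ansible provisioner like the ones in the other answers goes inside the resource. A sketch (swarm-setup.yml is a hypothetical playbook name):

  provisioner "ansible" {
    plays {
      enabled = true

      playbook = {
        # hypothetical playbook name; replace with your actual play
        file_path = "swarm-setup.yml"
      }
    }
  }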