1 vote

When using Terraform to run kubectl via the local-exec provisioner under a null_resource, I get the following error:

exit status 1. Output: error: open /Users/myuser/.kube/config.lock: file exists

Since I'm running the null_resource with count, it looks like Terraform spawns several kubectl commands in parallel, and kubectl doesn't like this. Are you familiar with a way to serialize the commands in a local-exec to prevent this issue? Any other ideas?
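
For context, a simplified sketch of the kind of configuration that triggers this (the variable name var.manifests is a stand-in, not my actual code):

  resource "null_resource" "apply_manifest" {
    count = "${length(var.manifests)}"

    # Each count instance runs its provisioner concurrently, so several
    # kubectl processes race for ~/.kube/config.lock at the same time.
    provisioner "local-exec" {
      command = "kubectl apply -f ${element(var.manifests, count.index)}"
    }
  }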

If you don't mind, can you share your resource configuration? – Oluwafemi Sule

2 Answers

0 votes

Instead of running the null_resource with count, build the combined document or command in a variable using the template provider (https://www.terraform.io/docs/providers/template/index.html) and execute kubectl once.

kubectl can consume several YAML documents at once (separated by --- lines), and if you are referring to resources by id, several ids can be listed after a single kubectl command, separated by spaces.
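
A minimal sketch of that approach (Terraform 0.11-style syntax; the variable and file names are assumptions): render each manifest with the template provider, join the results with YAML document separators, and apply the combined file with a single kubectl run.

  variable "manifests" {
    default = ["resource1.yml", "resource2.yml"]
  }

  # Render each manifest (here they contain no interpolations, so the
  # rendered output is just the file contents).
  data "template_file" "manifest" {
    count    = "${length(var.manifests)}"
    template = "${file(element(var.manifests, count.index))}"
  }

  # Join the documents with YAML separators and write one combined file.
  resource "local_file" "combined" {
    content  = "${join("\n---\n", data.template_file.manifest.*.rendered)}"
    filename = "${path.module}/combined.yml"
  }

  # A single kubectl invocation, so there is no contention for the
  # kubeconfig lock.
  resource "null_resource" "apply_combined" {
    provisioner "local-exec" {
      command = "kubectl apply -f ${local_file.combined.filename}"
    }
  }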

0 votes

When using the local-exec provisioner with a null_resource, we've found that you can serialize the commands by placing them in a single heredoc, which runs them one after another in the same shell:

  provisioner "local-exec" {
    # The whole heredoc is handed to a single shell, so the kubectl
    # commands run sequentially rather than in parallel.
    command = <<-EOT
      export KUBECONFIG=/root/.kube/config
      kubectl create -f resource1.yml
      kubectl create -f resource2.yml
      unset KUBECONFIG
    EOT
  }
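
Because everything is passed to one shell invocation, only a single kubectl process touches the kubeconfig lock at any time, and you no longer need count on the null_resource for this step.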