
We are trying to create namespaces in our AKS cluster through Terraform, but it keeps failing with errors. We are logging in with a Service Principal via az login.

Configuration:

resource "null_resource" "aks_login" {
  triggers = {
    always_run = "${timestamp()}"
  }
  provisioner "local-exec" {
    command = "az aks get-credentials -g rg-my-cluster -n my-cluster --admin --overwrite-existing"
  }
}

resource "kubernetes_namespace" "test0" {
  metadata {
    name = "ns-test0"
  }
  depends_on = [null_resource.aks_login]
}

resource "kubernetes_namespace" "test1" {
  for_each = toset( var.testVal )
  metadata {
    labels = {
      istio-injection = "enabled"
    }
    name = "ns-test1-${each.key}"
  }
  depends_on = [null_resource.aks_login]
}

The error is:

module.namespaces.null_resource.aks_login (local-exec): Executing: ["/bin/sh" "-c" "az aks get-credentials -g rg-my-cluster -n my-cluster --admin --overwrite-existing"]
module.namespaces.null_resource.aks_login (local-exec): Merged "my-cluster-admin" as current context in /home/hpad/.kube/config
module.namespaces.null_resource.aks_login: Creation complete after 1s [id=1979085082878134694]
module.namespaces.kubernetes_namespace.test0: Creating...
module.namespaces.kubernetes_namespace.test1["t1"]: Creating...


Error: Post "http://localhost/api/v1/namespaces": dial tcp [::1]:80: connect: connection refused

Error: Post "http://localhost/api/v1/namespaces": dial tcp [::1]:80: connect: connection refused

Terraform still seems to be targeting a local Kubernetes cluster even though the AKS login succeeds. Am I missing something here, or could this be a bug in the providers?

The kubernetes provider configuration is currently empty. We don't have a kubeconfig file, so we can't point the provider at one. Configuring host, client_certificate, client_key and cluster_ca_certificate instead throws an Unauthorized error (as it should, since that form of access is blocked for us).
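Roughly, the two variants of the provider block we have tried look like this (only one would exist at a time; the variable names are just placeholders for wherever the connection details would come from):

provider "kubernetes" {}   # current state: no configuration at all

provider "kubernetes" {
  # placeholder variables; certificate values assumed to be
  # base64-encoded PEM, as AKS returns them
  host                   = var.aks_host
  client_certificate     = base64decode(var.aks_client_certificate)
  client_key             = base64decode(var.aks_client_key)
  cluster_ca_certificate = base64decode(var.aks_cluster_ca_certificate)
}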

Some references I have consulted:

How to always run local-exec with Terraform

How to automatically authenticate against the kubernetes cluster after creating it with terraform in Azure?

Create Resource Dependencies

1 Answer


In most cases it's not practical to both bootstrap access to a service and make use of it with a separate provider in the same Terraform configuration, because this breaks Terraform's model of creating a full plan before taking any action. The failure here is likely because Terraform must plan the creation of kubernetes_namespace.test0 before the kubernetes provider has any cluster configuration to work with, and an unconfigured kubernetes provider falls back to localhost, which is exactly what the connection-refused errors show.

The most robust way to get this done, then, is to split this Terraform configuration into two separate parts: first, a configuration that uses the azurerm provider to create the cluster, and then separately a configuration which has a properly-configured provider "kubernetes" block which refers to the cluster that the first configuration created.
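As a rough sketch, and assuming your Service Principal is allowed to read the cluster through an azurerm_kubernetes_cluster data source (names below are placeholders matching your example), the second configuration could look something like this:

provider "azurerm" {
  features {}
}

# Look up the cluster that the first configuration (or another process) created
data "azurerm_kubernetes_cluster" "this" {
  name                = "my-cluster"
  resource_group_name = "rg-my-cluster"
}

# Configure the kubernetes provider from the cluster's kubeconfig attributes
provider "kubernetes" {
  host                   = data.azurerm_kubernetes_cluster.this.kube_config[0].host
  client_certificate     = base64decode(data.azurerm_kubernetes_cluster.this.kube_config[0].client_certificate)
  client_key             = base64decode(data.azurerm_kubernetes_cluster.this.kube_config[0].client_key)
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.this.kube_config[0].cluster_ca_certificate)
}

resource "kubernetes_namespace" "test0" {
  metadata {
    name = "ns-test0"
  }
}

Because the cluster details come from a data source that is read during planning, the kubernetes provider has a real endpoint to connect to instead of falling back to localhost.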

Your example also shows an attempt to use a local-exec provisioner to acquire credentials. That doesn't help, because provisioners run during the apply step rather than the plan step, so the kubeconfig it produces is not yet available when the kubernetes provider is configured during planning.

The typical way to use Terraform is to perform any necessary login steps before running it, such as running az login and az aks get-credentials at your shell prompt. The relevant Terraform providers can then pick up the credentials and other settings automatically, whether from environment variables or credentials files, following the usual conventions for that service.
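For example, assuming the admin context shown in your log output ("my-cluster-admin") and the default kubeconfig location, the provider block can then be as small as:

# Run before terraform plan/apply, outside of Terraform:
#   az login --service-principal ...
#   az aks get-credentials -g rg-my-cluster -n my-cluster --admin --overwrite-existing

provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "my-cluster-admin"  # context merged by az aks get-credentials --admin
}

With this in place, the kubernetes_namespace resources no longer need the depends_on workaround, because the provider is fully configured before Terraform ever plans them.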