We are trying to create namespaces in our AKS cluster through Terraform, but it keeps failing with errors. We use a Service Principal for `az login`.

Configuration:
resource "null_resource" "aks_login" {
triggers = {
always_run = "${timestamp()}"
}
provisioner "local-exec" {
command = "az aks get-credentials -g rg-my-cluster -n my-cluster --admin --overwrite-existing"
}
}
resource "kubernetes_namespace" "test0" {
metadata {
name = "ns-test0"
}
depends_on = [null_resource.aks_login]
}
resource "kubernetes_namespace" "test1" {
for_each = toset( var.testVal )
metadata {
labels = {
istio-injection = "enabled"
}
name = "ns-test1-${each.key}"
}
depends_on = [null_resource.aks_login]
}
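For reference, `var.testVal` drives the `for_each` above; a minimal declaration would look like the sketch below (the default value is hypothetical, chosen to match the `t1` instance in the error output):

```hcl
# Hypothetical declaration of the list that feeds the for_each above.
variable "testVal" {
  type    = list(string)
  default = ["t1"]
}
```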
The error output is:
```
module.namespaces.null_resource.aks_login (local-exec): Executing: ["/bin/sh" "-c" "az aks get-credentials -g rg-my-cluster -n my-cluster --admin --overwrite-existing"]
module.namespaces.null_resource.aks_login (local-exec): Merged "my-cluster-admin" as current context in /home/hpad/.kube/config
module.namespaces.null_resource.aks_login: Creation complete after 1s [id=1979085082878134694]
module.namespaces.kubernetes_namespace.test0: Creating...
module.namespaces.kubernetes_namespace.test1["t1"]: Creating...

Error: Post "http://localhost/api/v1/namespaces": dial tcp [::1]:80: connect: connection refused

Error: Post "http://localhost/api/v1/namespaces": dial tcp [::1]:80: connect: connection refused
```
Terraform is still targeting a local Kubernetes cluster even though the login to AKS succeeds. Am I missing something here, or could this be a bug in the providers?
The `kubernetes` provider configuration is currently empty. We don't have a kubeconfig file, so pointing the provider at one isn't an option. Also, configuring `host`, `client_certificate`, `client_key`, and `cluster_ca_certificate` throws an `Unauthorized` error (as expected, since access via those credentials is blocked).
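For context, the provider block we tried (the one that returns `Unauthorized`) looked roughly like the sketch below; the `azurerm_kubernetes_cluster` data source name is illustrative, and `kube_config` could equally be `kube_admin_config` given the `--admin` login:

```hcl
# Sketch of the attempted provider configuration; names are illustrative.
data "azurerm_kubernetes_cluster" "aks" {
  name                = "my-cluster"
  resource_group_name = "rg-my-cluster"
}

provider "kubernetes" {
  host                   = data.azurerm_kubernetes_cluster.aks.kube_config.0.host
  client_certificate     = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate)
  client_key             = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.client_key)
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)
}
```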