In my Terraform configuration files I create a Kubernetes cluster on GKE and, once it is created, configure a Kubernetes provider to access that cluster and perform various actions, such as setting up namespaces.
The problem is that some new namespaces were created in the cluster without Terraform, and now my attempts to import these namespaces into my state fail due to an inability to connect to the cluster. I believe this is caused by the following limitation (taken from Terraform's official documentation of the import command):
The only limitation Terraform has when reading the configuration files is that the import provider configurations must not depend on non-variable inputs. For example, a provider configuration cannot depend on a data source.
The command I used to import the namespaces is pretty straightforward:
terraform import kubernetes_namespace.my_new_namespace my_new_namespace
I also tried using the -provider="" and -config="" flags, but to no avail.
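For reference, those attempts looked roughly like this (the flag values are placeholders for what I actually passed):

terraform import -config=. -provider=kubernetes kubernetes_namespace.my_new_namespace my_new_namespace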
My Kubernetes provider configuration is this:
provider "kubernetes" {
version = "~> 1.8"
host = module.gke.endpoint
token = data.google_client_config.current.access_token
cluster_ca_certificate = base64decode(module.gke.cluster_ca_certificate)
}
An example of a namespace resource I am trying to import is this:
resource "kubernetes_namespace" "my_new_namespace" {
metadata {
name = "my_new_namespace"
}
}
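The namespace itself definitely exists in the cluster (it was created outside of Terraform), which a quick check confirms:

kubectl get namespace my_new_namespace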
The import command results in the following:
Error: Get http://localhost/api/v1/namespaces/my_new_namespace: dial tcp [::1]:80: connect: connection refused
It's obviously doomed to fail, since it's trying to reach localhost instead of the actual cluster IP and credentials.
Is there any workaround for this use case?
Thanks in advance.
Try running gcloud container clusters get-credentials to generate a local kubeconfig; I believe the terraform import command will use your local kubeconfig/context. I'm guessing the module.gke.endpoint isn't coming back with localhost, so it's getting that from somewhere... – Jai Govindani
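A minimal sketch of that suggestion, with the cluster name and zone as placeholders:

gcloud container clusters get-credentials my-cluster --zone us-central1-a

With a local kubeconfig in place, the provider can temporarily be pointed at it, replacing the module-driven configuration for the duration of the import (config_path and load_config_file are standard arguments of the 1.x Kubernetes provider):

provider "kubernetes" {
  version          = "~> 1.8"
  load_config_file = true
  config_path      = "~/.kube/config"
}

terraform import kubernetes_namespace.my_new_namespace my_new_namespace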