
In my Terraform config files I create a Kubernetes cluster on GKE and, once it is created, set up a Kubernetes provider to access that cluster and perform various actions, such as setting up namespaces.

The problem is that some new namespaces were created in the cluster without Terraform, and now my attempts to import these namespaces into my state fail due to an inability to connect to the cluster, which I believe is caused by the following (taken from Terraform's official documentation of the import command):

The only limitation Terraform has when reading the configuration files is that the import provider configurations must not depend on non-variable inputs. For example, a provider configuration cannot depend on a data source.

The command I used to import the namespaces is pretty straightforward:

terraform import kubernetes_namespace.my_new_namespace my_new_namespace

I also tried using the -provider="" and -config="" flags, but to no avail.

My Kubernetes provider configuration is this:

provider "kubernetes" {
  version = "~> 1.8"

  host  = module.gke.endpoint
  token = data.google_client_config.current.access_token

  cluster_ca_certificate = base64decode(module.gke.cluster_ca_certificate)
}

An example for a namespace resource I am trying to import is this:

resource "kubernetes_namespace" "my_new_namespace" {
  metadata {
    name = "my_new_namespace"
  }
}

The import command results in the following:

Error: Get http://localhost/api/v1/namespaces/my_new_namespace: dial tcp [::1]:80: connect: connection refused

It's obviously doomed to fail, since it's trying to reach localhost instead of the actual cluster endpoint and credentials.

Is there any workaround for this use case?

Thanks in advance.

You could temporarily hardcode the provider config from the known outputs while you import the resources and then revert your change when you're done (see the sketch after these comments). – ydaetskcoR
Can you reach your cluster via the API? Does kubectl get <something> work? – JohnMops
Still having this issue in 2021, if anyone has the answer that would be awesome... :D – h1fra
Yes, it seems it is a complete dead end to look for a sound solution. – Kat Lim Ruiz
Is it grabbing localhost from your local kubectl kubeconfig? If you can do a gcloud container clusters get-credentials to generate a local kubeconfig, I believe the terraform import command will use your local kubeconfig/context. I'm guessing the module.gke.endpoint isn't coming back with localhost, so it's getting it from somewhere... – Jai Govindani
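
A minimal sketch of the temporary hardcode suggested in the first comment; the host, token, and CA certificate values below are placeholders you would copy from your known outputs (e.g. terraform output or the GCP console), and the block should be reverted to the original module/data-source references once the import succeeds:

provider "kubernetes" {
  version = "~> 1.8"

  # Placeholder values for illustration only; substitute the real cluster
  # endpoint, a valid access token, and the cluster's CA certificate,
  # then revert this change after the import is done.
  host                   = "https://203.0.113.10"
  token                  = "ya29.placeholder-access-token"
  cluster_ca_certificate = base64decode("LS0tLS1CRUdJTi...placeholder...")
}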

1 Answer


(1) Create an entry in your kubeconfig file for your GKE cluster.

gcloud container clusters get-credentials cluster-name

see: https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl#generate_kubeconfig_entry

(2) Point the Terraform Kubernetes provider to your kubeconfig:

provider "kubernetes" {
  config_path = "~/.kube/config"
}
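
With the kubeconfig entry in place, the provider no longer depends on module outputs or a data source, so the import command from the question should be able to reach the cluster:

terraform import kubernetes_namespace.my_new_namespace my_new_namespace

Once the namespaces are in state, you can switch the provider back to the original endpoint/token configuration if you prefer.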