2 votes

I'm after an example that would do the following:

  1. Create a Kubernetes cluster on GKE via Terraform's google_container_cluster
  2. ... and continue creating namespaces in it, I suppose via kubernetes_namespace

The thing I'm not sure about is how to connect the newly created cluster and the namespace definition. For example, when adding google_container_node_pool, I can do something like cluster = "${google_container_cluster.hosting.name}" but I don't see anything similar for kubernetes_namespace.

Not an answer because I haven't played with this, but it looks like the google_container_cluster provider exports the necessary data: terraform.io/docs/providers/google/r/… to be able to connect and auth to the cluster (e.g. IP and cert data). That data can populate a credential block for the k8s provider: terraform.io/docs/providers/kubernetes/…. The k8s provider uses the k8s go client internally to issue authenticated api calls to create namespaces and perform other cluster ops. – Jonah Benton

1 Answer

13 votes

In theory it is possible to reference resources from the GCP provider in the K8S (or any other) provider in the same way you'd reference resources or data sources within the context of a single provider.

provider "google" {
  region = "us-west1"
}

data "google_compute_zones" "available" {}

resource "google_container_cluster" "primary" {
  name = "the-only-marcellus-wallace"
  zone = "${data.google_compute_zones.available.names[0]}"
  initial_node_count = 3

  additional_zones = [
    "${data.google_compute_zones.available.names[1]}"
  ]

  master_auth {
    username = "mr.yoda"
    password = "adoy.rm"
  }

  node_config {
    oauth_scopes = [
      "https://www.googleapis.com/auth/compute",
      "https://www.googleapis.com/auth/devstorage.read_only",
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring"
    ]
  }
}

provider "kubernetes" {
  host = "https://${google_container_cluster.primary.endpoint}"
  username = "${google_container_cluster.primary.master_auth.0.username}"
  password = "${google_container_cluster.primary.master_auth.0.password}"
  client_certificate = "${base64decode(google_container_cluster.primary.master_auth.0.client_certificate)}"
  client_key = "${base64decode(google_container_cluster.primary.master_auth.0.client_key)}"
  cluster_ca_certificate = "${base64decode(google_container_cluster.primary.master_auth.0.cluster_ca_certificate)}"
}

resource "kubernetes_namespace" "n" {
  metadata {
    name = "blablah"
  }
}

However, in practice it may not work as expected due to known core bugs breaking cross-provider dependencies; see https://github.com/hashicorp/terraform/issues/12393 and https://github.com/hashicorp/terraform/issues/4149.

The alternative solutions would be:

  1. Use a 2-staged apply and target the GKE cluster first, then anything else that depends on it, i.e. terraform apply -target=google_container_cluster.primary and then terraform apply (see the sketch after this list)
  2. Separate out the GKE cluster config from the K8S configs, give them completely isolated workflows, and connect them via remote state.
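For the first option, a minimal sketch of the two-stage workflow (using the resource address from the config above) would be:

terraform apply -target=google_container_cluster.primary
terraform apply

The configs below illustrate the second option.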

/terraform-gke/main.tf

terraform {
  backend "gcs" {
    bucket  = "tf-state-prod"
    prefix  = "terraform/state"
  }
}

provider "google" {
  region = "us-west1"
}

data "google_compute_zones" "available" {}

resource "google_container_cluster" "primary" {
  name = "the-only-marcellus-wallace"
  zone = "${data.google_compute_zones.available.names[0]}"
  initial_node_count = 3

  additional_zones = [
    "${data.google_compute_zones.available.names[1]}"
  ]

  master_auth {
    username = "mr.yoda"
    password = "adoy.rm"
  }

  node_config {
    oauth_scopes = [
      "https://www.googleapis.com/auth/compute",
      "https://www.googleapis.com/auth/devstorage.read_only",
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring"
    ]
  }
}

output "gke_host" {
  value = "https://${google_container_cluster.primary.endpoint}"
}

output "gke_username" {
  value = "${google_container_cluster.primary.master_auth.0.username}"
}

output "gke_password" {
  value = "${google_container_cluster.primary.master_auth.0.password}"
}

output "gke_client_certificate" {
  value = "${base64decode(google_container_cluster.primary.master_auth.0.client_certificate)}"
}

output "gke_client_key" {
  value = "${base64decode(google_container_cluster.primary.master_auth.0.client_key)}"
}

output "gke_cluster_ca_certificate" {
  value = "${base64decode(google_container_cluster.primary.master_auth.0.cluster_ca_certificate)}"
}

Here we're exposing all the necessary configuration via outputs and using a backend to store the state, along with these outputs, in a remote location (GCS in this case). This enables us to reference them in the config below.

/terraform-k8s/main.tf

data "terraform_remote_state" "foo" {
  backend = "gcs"
  config {
    bucket  = "tf-state-prod"
    prefix  = "terraform/state"
  }
}

provider "kubernetes" {
  host = "https://${data.terraform_remote_state.foo.gke_host}"
  username = "${data.terraform_remote_state.foo.gke_username}"
  password = "${data.terraform_remote_state.foo.gke_password}"
  client_certificate = "${base64decode(data.terraform_remote_state.foo.gke_client_certificate)}"
  client_key = "${base64decode(data.terraform_remote_state.foo.gke_client_key)}"
  cluster_ca_certificate = "${base64decode(data.terraform_remote_state.foo.gke_cluster_ca_certificate)}"
}

resource "kubernetes_namespace" "n" {
  metadata {
    name = "blablah"
  }
}

What may or may not be obvious here is that the cluster has to be created/updated before creating/updating any K8S resources (if such an update relies on updates to the cluster).
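With the layout above, that ordering might look roughly like this (assuming the terraform-gke and terraform-k8s directories shown in this answer, with init/plan steps omitted):

cd terraform-gke && terraform apply
cd ../terraform-k8s && terraform apply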


Taking the 2nd approach is generally advisable either way (even if the bug were not a factor and cross-provider references worked), as it reduces the blast radius and defines much clearer responsibilities. It's (IMO) common for such deployments to have one person/team responsible for managing the cluster and a different one for managing K8S resources.

There may certainly be overlaps though - e.g. ops wanting to deploy logging & monitoring infrastructure on top of a fresh GKE cluster - which is the kind of use case cross-provider dependencies aim to satisfy. For that reason I'd recommend subscribing to the GH issues mentioned above.