1 vote

Just a quick sanity check, I think; maybe my eyes are getting confused. I'm breaking a monolithic Terraform file into modules.

My main.tf calls just two modules: gke, for Google Kubernetes Engine, and storage, which creates a persistent volume on the cluster created by the first.

Module gke has an outputs.tf which outputs the following:

output "client_certificate" {
  value     = "${google_container_cluster.kube-cluster.master_auth.0.client_certificate}"
 sensitive = true
}
output "client_key" {
  value     = "${google_container_cluster.kube-cluster.master_auth.0.client_key}"
 sensitive = true
}
output "cluster_ca_certificate" {
  value     = "${google_container_cluster.kube-cluster.master_auth.0.cluster_ca_certificate}"
 sensitive = true
}
output "host" {
  value     = "${google_container_cluster.kube-cluster.endpoint}"
 sensitive = true
}

Then in the main.tf for the storage module, I have:

client_certificate     = "${base64decode(var.client_certificate)}"
client_key             = "${base64decode(var.client_key)}"
cluster_ca_certificate = "${base64decode(var.cluster_ca_certificate)}"
host     = "${var.host}"

Then in the root main.tf I have the following:

client_certificate = "${module.gke.client_certificate}"
client_key = "${module.gke.client_key}"
cluster_ca_certificate = "${module.gke.cluster_ca_certificate}"
host = "${module.gke.host}"

From what I can see, it looks right. The values for the certs, key and host should be output from the gke module by outputs.tf, picked up by the root main.tf, and then passed into storage as regular input variables.
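For reference, the storage module just declares these as ordinary input variables (descriptions trimmed here; the full variables.tf is further down):

variable "client_certificate" {}
variable "client_key" {}
variable "cluster_ca_certificate" {}
variable "host" {}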

Have I got it the wrong way around? Or am I just going crazy? Something doesn't seem right.

When I run a plan, Terraform prompts me for values for those variables.

EDIT:

Adding some additional information, including my code.

If I manually add dummy entries for the variables it's asking for, I get the following error:

Macbook: $ terraform plan
var.client_certificate
  Enter a value: 1

var.client_key
  Enter a value: 2

var.cluster_ca_certificate
  Enter a value: 3

var.host
  Enter a value: 4
...
(filtered out usual text)
...
 * module.storage.data.google_container_cluster.kube-cluster: 1 error(s) occurred:

* module.storage.data.google_container_cluster.kube-cluster: data.google_container_cluster.kube-cluster: project: required field is not set

It looks like it's complaining that the data.google_container_cluster data source needs the project attribute. But as far as I can tell, project isn't a valid attribute there; it's valid for the provider, and it's already filled out in the provider block.
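For what it's worth, if that data source really does want an explicit project, I assume the fix would be to pass one into the storage module and set it there, something like this (a sketch only, not what I currently have; var.project would also need declaring in storage/variables.tf and passing in from the root main.tf):

# Hypothetical change in storage/main.tf
data "google_container_cluster" "kube-cluster" {
  name    = "${var.cluster_name}"
  zone    = "${var.zone}"
  project = "${var.project}"
}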

Code below:

Folder structure:

root-folder/
├── gke/
│   ├── main.tf
│   ├── outputs.tf
│   ├── variables.tf
├── storage/
│   ├── main.tf
│   └── variables.tf
├── main.tf
├── staging.json
├── terraform.tfvars
└── variables.tf

root-folder/gke/main.tf:

provider "google" {
  credentials = "${file("staging.json")}"
  project     = "${var.project}"
  region      = "${var.region}"
  zone        = "${var.zone}"
}

resource "google_container_cluster" "kube-cluster" {
  name               = "kube-cluster"
  description        = "kube-cluster"
  zone               = "europe-west2-a"
  initial_node_count = "2"
  enable_kubernetes_alpha = "false"
  enable_legacy_abac = "true"

  master_auth {
    username = "${var.username}"
    password = "${var.password}"
  }

  node_config {
    machine_type = "n1-standard-2"
    disk_size_gb = "20"
    oauth_scopes = [
      "https://www.googleapis.com/auth/compute",
      "https://www.googleapis.com/auth/devstorage.read_only",
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring"
    ]
  }
}

root-folder/gke/outputs.tf:

output "client_certificate" {
  value     = "${google_container_cluster.kube-cluster.master_auth.0.client_certificate}"
 sensitive = true
}
output "client_key" {
  value     = "${google_container_cluster.kube-cluster.master_auth.0.client_key}"
 sensitive = true
}
output "cluster_ca_certificate" {
  value     = "${google_container_cluster.kube-cluster.master_auth.0.cluster_ca_certificate}"
 sensitive = true
}
output "host" {
  value     = "${google_container_cluster.kube-cluster.endpoint}"
 sensitive = true
}

root-folder/gke/variables.tf:

variable "region" {
  description = "GCP region, e.g. europe-west2"
  default = "europe-west2"
}
variable "zone" {
  description = "GCP zone, e.g. europe-west2-a (which must be in gcp_region)"
  default = "europe-west2-a"
}
variable "project" {
  description = "GCP project name"
}
variable "username" {
  description = "Default admin username"
}
variable "password" {
  description = "Default admin password"
}

/root-folder/storage/main.tf:

provider "kubernetes" {
  host     = "${var.host}"
  username = "${var.username}"
  password = "${var.password}"
  client_certificate     = "${base64decode(var.client_certificate)}"
  client_key             = "${base64decode(var.client_key)}"
  cluster_ca_certificate = "${base64decode(var.cluster_ca_certificate)}"
}
data "google_container_cluster" "kube-cluster" {
  name   = "${var.cluster_name}"
  zone   = "${var.zone}"
}
resource "kubernetes_storage_class" "kube-storage-class" {
  metadata {
    name = "kube-storage-class"
  }
  storage_provisioner = "kubernetes.io/gce-pd"
  parameters {
    type = "pd-standard"
  }
}
resource "kubernetes_persistent_volume_claim" "kube-claim" {
  metadata {
    name      = "kube-claim"
  }
  spec {
    access_modes       = ["ReadWriteOnce"]
    storage_class_name = "kube-storage-class"
    resources {
      requests {
        storage = "10Gi"
      }
    }
  }
}

/root-folder/storage/variables.tf:

variable "username" {
  description = "Default admin username."
}
variable "password" {
  description = "Default admin password."
}
variable "client_certificate" {
  description = "Client certificate, output from the GKE/Provider module."
}
variable "client_key" {
  description = "Client key, output from the GKE/Provider module."
}
variable "cluster_ca_certificate" {
  description = "Cluster CA Certificate, output from the GKE/Provider module."
}
variable "cluster_name" {
  description = "Cluster name."
}
variable "zone" {
  description = "GCP Zone"
}
variable "host" {
  description = "Host endpoint, output from the GKE/Provider module."
}

/root-folder/main.tf:

module "gke" {
  source = "./gke"
  project = "${var.project}"
  region = "${var.region}"
  username = "${var.username}"
  password = "${var.password}"
}
module "storage" {
  source = "./storage"
  host = "${module.gke.host}"
  username = "${var.username}"
  password = "${var.password}"
  client_certificate = "${module.gke.client_certificate}"
  client_key = "${module.gke.client_key}"
  cluster_ca_certificate = "${module.gke.cluster_ca_certificate}"
  cluster_name = "${var.cluster_name}"
  zone = "${var.zone}"
}

/root-folder/variables.tf:

variable "project" {}
variable "region" {}
variable "username" {}
variable "password" {}
variable "gc_disk_size" {}
variable "kpv_vol_size" {}
variable "host" {}
variable "client_certificate" {}
variable "client_key" {}
variable "cluster_ca_certificate" {}
variable "cluster_name" {}
variable "zone" {}

I won't paste the contents of my staging.json and terraform.tfvars for obvious reasons :)

In your state and in "terraform output", do you have values? - bast
Unable to tell from either, as it's not getting that far. As soon as I run a plan it asks for values for the variables. I've put dummy values in, but I have other errors after that which I need to resolve. I see nothing in the output and the tfstate never populates. - jonnybinthemix
It was more of a sanity check to make sure it's all the correct way around. Maybe it's the other issues causing it? - jonnybinthemix
From what you've explained, I think it makes sense, but there's something we're missing by not seeing all of the code and all of the error messages. - KJH
Apologies, I see that it's difficult to help with limited information. I've updated the original post with the error I currently get, along with all of the code and the folder structure. Hopefully that will help. Thanks for looking, I appreciate it. - jonnybinthemix

1 Answer

4 votes

In your /root-folder/variables.tf, delete the following entries:

variable "host" {}
variable "client_certificate" {}
variable "client_key" {}
variable "cluster_ca_certificate" {}

Those aren't variables that the Terraform code at the root level needs per se. Instead, those values are passed straight from one module's outputs into the other module's inputs.
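With those four removed, the root variables.tf only declares the values that actually come from terraform.tfvars; based on the file posted in the question, it would look roughly like this:

variable "project" {}
variable "region" {}
variable "username" {}
variable "password" {}
variable "gc_disk_size" {}
variable "kpv_vol_size" {}
variable "cluster_name" {}
variable "zone" {}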