6
votes

I have been experimenting with Terraform, Kubernetes, Cassandra and Elassandra. I separated everything into modules, but now I can't delete a specific module.

I'm using GitLab CI, and I store the Terraform state in an AWS S3 backend. This means that every time I change the infrastructure in the Terraform files and push, a GitLab CI pipeline updates the infrastructure by running terraform init, terraform plan and terraform apply.
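Simplified, the pipeline runs something like this (an illustrative sketch, not my exact CI script; the backend settings are passed at init time because the backend "s3" block in main.tf below is left empty, and the variable names here are made up):

terraform init \
  -backend-config="bucket=$TF_STATE_BUCKET" \
  -backend-config="key=$TF_KEY" \
  -backend-config="region=$AWS_REGION" \
  -backend-config="dynamodb_table=$TF_STATE_TABLE"
terraform plan -out=plan.tfplan
terraform apply plan.tfplan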

My Terraform main file is this:

# main.tf
##########################################################################################################################################
# BACKEND                                                                                                                                #
##########################################################################################################################################

terraform {
  backend "s3" {}
}

data "terraform_remote_state" "state" {
  backend = "s3"
  config {
    bucket         = "${var.tf_state_bucket}"
    dynamodb_table = "${var.tf_state_table}"
    region         = "${var.aws-region}"
    key            = "${var.tf_key}"
  }
}

##########################################################################################################################################
# Modules                                                                                                                                #
##########################################################################################################################################

# Cloud Providers: -----------------------------------------------------------------------------------------------------------------------
module "gke" {
  source    = "./gke"
  project   = "${var.gcloud_project}"
  workspace = "${terraform.workspace}"
  region    = "${var.region}"
  zone      = "${var.gcloud-zone}"
  username  = "${var.username}"
  password  = "${var.password}"
}

module "aws" {
  source   = "./aws-config"
  aws-region      = "${var.aws-region}"
  aws-access_key  = "${var.aws-access_key}"
  aws-secret_key  = "${var.aws-secret_key}"
}

# Elassandra: ----------------------------------------------------------------------------------------------------------------------------
module "k8s-elassandra" {
  source   = "./k8s-elassandra"

  host     = "${module.gke.host}"
  username = "${var.username}"
  password = "${var.password}"

  client_certificate     = "${module.gke.client_certificate}"
  client_key             = "${module.gke.client_key}"
  cluster_ca_certificate = "${module.gke.cluster_ca_certificate}"
}

# Cassandra: ----------------------------------------------------------------------------------------------------------------------------
 module "k8s-cassandra" { 
   source   = "./k8s-cassandra"

   host     = "${module.gke.host}"
   username = "${var.username}"
   password = "${var.password}"

   client_certificate     = "${module.gke.client_certificate}"
   client_key             = "${module.gke.client_key}"
   cluster_ca_certificate = "${module.gke.cluster_ca_certificate}"
 }

This is my directory tree:

.
├── aws-config
│   ├── terraform_s3.tf
│   └── variables.tf
├── gke
│   ├── cluster.tf
│   ├── gcloud_access_key.json
│   ├── gcp.tf
│   └── variables.tf
├── k8s-cassandra
│   ├── k8s.tf
│   ├── limit_ranges.tf
│   ├── quotas.tf
│   ├── services.tf
│   ├── stateful_set.tf
│   └── variables.tf
├── k8s-elassandra
│   ├── k8s.tf
│   ├── limit_ranges.tf
│   ├── quotas.tf
│   ├── services.tf
│   ├── stateful_set.tf
│   └── variables.tf
├── main.tf
└── variables.tf

I'm blocked here:

-> I want to remove the module k8s-cassandra

  • If I comment out or delete the module in main.tf (module "k8s-cassandra" {...}), I receive this error:

TERRAFORM PLAN... Acquiring state lock. This may take a few moments... Releasing state lock. This may take a few moments...

Error: module.k8s-cassandra.kubernetes_stateful_set.cassandra: configuration for module.k8s-cassandra.provider.kubernetes is not present; a provider configuration block is required for all operations

  • If I insert terraform destroy -target=module.k8s-cassandra -auto-approve between terraform init and terraform plan, it still doesn't work.

Can anyone help me, please? Thanks :)

1
Where is the kubernetes provider defined? – SomeGuyOnAComputer
@SomeGuyOnAComputer I'm defining it in both k8s.tf files: provider "kubernetes" { version = "~> 1.5.0" host = "${var.host}" username = "${var.username}" password = "${var.password}" client_certificate = "${base64decode(var.client_certificate)}" client_key = "${base64decode(var.client_key)}" cluster_ca_certificate = "${base64decode(var.cluster_ca_certificate)}" } resource "kubernetes_namespace" "terraform-elassandra-namespace" { metadata { annotations { name = "terraform-elassandra-namespace" } labels { app = "elassandra" } name = "terraform-elassandra-namespace" } } – Rui Martins
Try only defining it in the parent main.tf – SomeGuyOnAComputer
If you commented the code, uncomment it, then do the terraform destroy with the target, then comment the code and you should be set... – night-gold
Is there any code that references resources like module.k8s-cassandra.XXX? – BMW

1 Answer

14
votes

The meaning of this error message is that Terraform was relying on a provider "kubernetes" block inside the k8s-cassandra module in order to configure the Kubernetes provider. By removing the module from the source code, you've implicitly removed that provider configuration as well, so the existing objects already recorded in the state cannot be deleted: the provider configuration needed to delete them is no longer present.

Although Terraform allows provider blocks inside child modules for flexibility, the documentation recommends keeping all of them in the root module and passing provider configurations into child modules, either explicitly via a providers map or implicitly through automatic inheritance by name:

provider "kubernetes" {
  # global kubernetes provider config
}

module "k8s-cassandra" {
  # ...module arguments...

  # provider "kubernetes" is automatically inherited by default, but you
  # can also set it explicitly:
  providers = {
    "kubernetes" = "kubernetes"
  }
}
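(The quoted keys and values shown above are Terraform 0.11 syntax; if you later upgrade to Terraform 0.12 or newer, the equivalent is an unquoted reference: providers = { kubernetes = kubernetes }.)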

To get out of the conflicted situation you're already in, though, the answer is to temporarily restore the module "k8s-cassandra" block and destroy the objects it manages before removing it, using the -target option:

terraform destroy -target module.k8s-cassandra

Once all of the objects managed by that module have been destroyed and removed from the state, you can safely remove the module "k8s-cassandra" block from the configuration.
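If you want to double-check before editing the configuration, terraform state list should show no remaining entries under module.k8s-cassandra after the targeted destroy:

terraform state list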

To prevent this from happening again, rework the root and child modules so that all of the provider configurations live in the root module and the child modules only inherit provider configurations passed in from the root. For more information, see Providers Within Modules in the Terraform documentation.
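For example, a root-level configuration along these lines would work here (a sketch based on the gke module outputs already shown in the question; adjust the names to match your modules):

# In the root main.tf, alongside the module blocks:
provider "kubernetes" {
  version  = "~> 1.5.0"
  host     = "${module.gke.host}"
  username = "${var.username}"
  password = "${var.password}"

  # These outputs are assumed to be base64-encoded, as in the k8s.tf
  # provider block quoted in the comments above.
  client_certificate     = "${base64decode(module.gke.client_certificate)}"
  client_key             = "${base64decode(module.gke.client_key)}"
  cluster_ca_certificate = "${base64decode(module.gke.cluster_ca_certificate)}"
}

With this in place, the provider "kubernetes" blocks inside k8s-cassandra/k8s.tf and k8s-elassandra/k8s.tf can be removed, and both modules will inherit the root configuration by name.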