TL;DR
Two solutions:
- Create two separate modules with Terraform
- Use interpolations and depends_on between the code that creates your Kubernetes cluster and the Kubernetes resources:
resource "kubernetes_service" "example" {
metadata {
name = "my-service"
}
depends_on = ["aws_vpc.kubernetes"]
}
resource "aws_vpc" "kubernetes" {
...
}
When destroying resources
You are encountering a dependency lifecycle issue.
PS: I don't know the code you've used to create / provision your Kubernetes cluster, but I guess it looks like this:
- Write code for the Kubernetes cluster (creates a VPC)
- Apply it
- Write code for provisioning Kubernetes (create a Service that creates an ELB; see the sketch after this list)
- Apply it
- Try to destroy everything => Error
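As a reference, here is a minimal sketch of what that Service usually looks like (the name, selector and ports are made up; the type = "LoadBalancer" line is the part that matters):

    resource "kubernetes_service" "example" {
      metadata {
        name = "my-service"
      }

      spec {
        selector = {
          app = "my-app"
        }

        port {
          port        = 80
          target_port = 8080
        }

        # This is what makes Kubernetes (not Terraform) provision an ELB on AWS.
        type = "LoadBalancer"
      }
    }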
What is happening is that by creating a LoadBalancer Service, Kubernetes will provision an ELB on AWS. But Terraform doesn't know that, and there is no link between the created ELB and any other resource managed by Terraform.
So when Terraform tries to destroy the resources in the code, it will try to destroy the VPC. But it can't, because there is an ELB inside that VPC that Terraform doesn't know about.
The fix is to make sure that Terraform first "deprovisions" the Kubernetes resources (so the ELB gets deleted) and only then destroys the cluster itself.
Two solutions here:
- Use different modules so there is no dependency lifecycle. For example, the first module could be k8s-infra and the other k8s-resources. The first one manages the whole skeleton of Kubernetes and is applied first / destroyed last. The second one manages what is inside the cluster and is applied last / destroyed first (see the layout sketch after this list).
- Use the depends_on parameter to write the dependency lifecycle explicitly, as in the TL;DR example above.
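A rough sketch of the first option, assuming the two configurations live in two separate directories (the names k8s-infra and k8s-resources are just the examples above):

    # Apply: cluster infrastructure first, in-cluster resources last
    (cd k8s-infra     && terraform apply)
    (cd k8s-resources && terraform apply)

    # Destroy: in-cluster resources first, cluster infrastructure last
    (cd k8s-resources && terraform destroy)
    (cd k8s-infra     && terraform destroy)

Because the two states are separate, destroying k8s-resources deletes the Services (and therefore the ELBs) before the VPC is ever touched.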
When creating resources
You might also run into a dependency issue when terraform apply cannot create resources even if nothing is applied yet. I'll give another example with PostgreSQL:
- Write code to create an RDS PostgreSQL server
- Apply it with Terraform
- Write code, in the same module, to provision that RDS instance with the PostgreSQL Terraform provider
- Apply it with Terraform
- Destroy everything
- Try to apply everything => ERROR
By debugging Terraform a bit, I've learned that all the providers are initialized at the beginning of plan / apply, so if one of them has an invalid config (wrong API keys / unreachable endpoint), Terraform will fail.
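For illustration, a minimal sketch of the kind of setup that hits this (the resource name, password and attributes are made up; only the shape matters):

    resource "aws_db_instance" "mydb" {
      identifier        = "mydb"
      engine            = "postgres"
      instance_class    = "db.t3.micro"
      allocated_storage = 10
      username          = "postgres"
      password          = "change-me"   # placeholder
      # ...
    }

    # The provider config points at the (not yet existing) RDS endpoint,
    # but Terraform still tries to initialize it on every plan / apply.
    provider "postgresql" {
      host     = aws_db_instance.mydb.address
      username = "postgres"
      password = "change-me"            # placeholder
    }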
The solution here is to use the -target parameter of the plan / apply command. Terraform will only initialize the providers that are related to the resources being applied.
- Apply the RDS code with the AWS provider:

      terraform apply -target=aws_db_instance.<name>

- Apply everything:

      terraform apply

Because the RDS instance is now reachable, the PostgreSQL provider can also initialize itself.