I'm using Terraform to provision an EKS cluster (mostly following the example here). At the end of the tutorial, there's a method of outputting the configmap through the terraform output command and then applying it to the cluster via kubectl apply -f <file>.
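For reference, the tutorial's approach looks roughly like this (a sketch from memory, not verbatim; the config_map_aws_auth local/output names and the exact manifest layout are assumptions based on the guide, with my worker role ARN substituted in):

locals {
  config_map_aws_auth = <<CONFIGMAPAWSAUTH
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: ${aws_iam_role.eks_worker_iam_role.arn}
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
CONFIGMAPAWSAUTH
}

output "config_map_aws_auth" {
  value = "${local.config_map_aws_auth}"
}

You then run terraform output config_map_aws_auth > configmap.yml and kubectl apply -f configmap.yml by hand.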
I'm attempting to fold this kubectl step into the Terraform configuration itself using the kubernetes_config_map resource. However, when running Terraform for the first time, I receive the following error:
Error: Error applying plan:
1 error(s) occurred:
* kubernetes_config_map.config_map_aws_auth: 1 error(s) occurred:
* kubernetes_config_map.config_map_aws_auth: the server could not find the requested resource (post configmaps)
The strange thing is that every subsequent terraform apply works and applies the configmap to the EKS cluster. This leads me to believe it is perhaps a timing issue. I tried performing a number of actions between provisioning the cluster and applying the configmap, but that didn't work. I also added an explicit depends_on argument to ensure that the cluster is fully provisioned before Terraform attempts to apply the configmap.
provider "kubernetes" {
config_path = "kube_config.yaml"
}
locals {
map_roles = <<ROLES
- rolearn: ${aws_iam_role.eks_worker_iam_role.arn}
username: system:node:{{EC2PrivateDNSName}}
groups:
- system:bootstrappers
- system:nodes
ROLES
}
resource "kubernetes_config_map" "config_map_aws_auth" {
metadata {
name = "aws-auth"
namespace = "kube-system"
}
data {
mapRoles = "${local.map_roles}"
}
depends_on = ["aws_eks_cluster.eks_cluster"]
}
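Since I suspect the kubernetes provider is reading kube_config.yaml before the cluster actually exists, I'm wondering whether pointing the provider at the cluster resource directly would force the right ordering. A minimal sketch of what I mean (untested; the host, cluster_ca_certificate, and token arguments and the aws_eks_cluster_auth data source come from the provider docs, with my own resource names substituted in; I'm unsure whether I'd also need something like load_config_file = false on newer provider versions):

# Fetches a short-lived authentication token for the cluster.
data "aws_eks_cluster_auth" "eks_cluster" {
  name = "${aws_eks_cluster.eks_cluster.name}"
}

provider "kubernetes" {
  # Interpolating cluster attributes makes the provider configuration itself
  # depend on the cluster, instead of on a pre-written kubeconfig file.
  host                   = "${aws_eks_cluster.eks_cluster.endpoint}"
  cluster_ca_certificate = "${base64decode(aws_eks_cluster.eks_cluster.certificate_authority.0.data)}"
  token                  = "${data.aws_eks_cluster_auth.eks_cluster.token}"
}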
I expect this to run correctly the first time; however, it only succeeds after I apply the same file, with no changes, a second time.
I attempted to get more information by enabling TRACE-level debugging for Terraform (TF_LOG=TRACE); however, the only relevant output was the exact same error as above.