
I'm using Terraform to provision an EKS cluster (mostly following the example here). At the end of the tutorial, there's a method of outputting the ConfigMap through the terraform output command and then applying it to the cluster via kubectl apply -f <file>. I'm attempting to wrap this kubectl step into the Terraform configuration using the kubernetes_config_map resource; however, when running Terraform for the first time, I receive the following error:

Error: Error applying plan:

1 error(s) occurred:

* kubernetes_config_map.config_map_aws_auth: 1 error(s) occurred:

* kubernetes_config_map.config_map_aws_auth: the server could not find the requested resource (post configmaps)

The strange thing is, every subsequent terraform apply works and applies the ConfigMap to the EKS cluster. This leads me to believe it is perhaps a timing issue. I tried to perform a bunch of actions in between provisioning the cluster and applying the ConfigMap, but that didn't work. I also added an explicit depends_on argument to ensure that the cluster has been fully provisioned before attempting to apply the ConfigMap.

provider "kubernetes" {
  config_path = "kube_config.yaml"
}

locals {
  map_roles = <<ROLES
- rolearn: ${aws_iam_role.eks_worker_iam_role.arn}
  username: system:node:{{EC2PrivateDNSName}}
  groups:
    - system:bootstrappers
    - system:nodes
ROLES
}

resource "kubernetes_config_map" "config_map_aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data {
    mapRoles = "${local.map_roles}"
  }

  depends_on = ["aws_eks_cluster.eks_cluster"]
}
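
For reference, the kubectl-based approach from the tutorial that I'm trying to replace looks roughly like this (reconstructed from memory, so the exact names may differ slightly):

locals {
  config_map_aws_auth = <<CONFIGMAPAWSAUTH
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: ${aws_iam_role.eks_worker_iam_role.arn}
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
CONFIGMAPAWSAUTH
}

output "config_map_aws_auth" {
  value = "${local.config_map_aws_auth}"
}

That output is then saved to a file with terraform output and applied manually via kubectl apply -f, which is the manual step I'm trying to eliminate.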

I expect this to run correctly the first time; however, it only succeeds after applying the same file, with no changes, a second time.

I attempted to get more information by enabling TRACE logging for Terraform; however, the only output I got was the exact same error as above.


1 Answer


This seems like a timing issue while bootstrapping your cluster. Your kube-apiserver initially doesn't think there's a configmaps resource.

It's likely that the Role and RoleBinding used to create the ConfigMap have not yet been fully configured in the cluster (possibly within the EKS infrastructure itself). That setup relies on the aws-iam-authenticator and the following policies:

resource "aws_iam_role_policy_attachment" "demo-cluster-AmazonEKSClusterPolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = "${aws_iam_role.demo-cluster.name}"
}

resource "aws_iam_role_policy_attachment" "demo-cluster-AmazonEKSServicePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
  role       = "${aws_iam_role.demo-cluster.name}"
}
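
In the EKS getting started guide, those attachments are also listed as explicit dependencies of the cluster resource itself, so the IAM permissions exist before the control plane is created. Roughly along these lines (resource names here follow the guide; adjust to your own):

resource "aws_eks_cluster" "demo" {
  name     = "demo"
  role_arn = "${aws_iam_role.demo-cluster.arn}"

  vpc_config {
    security_group_ids = ["${aws_security_group.demo-cluster.id}"]
    subnet_ids         = ["${aws_subnet.demo.*.id}"]
  }

  # Make sure the IAM policies are attached before EKS tries to use the role.
  depends_on = [
    "aws_iam_role_policy_attachment.demo-cluster-AmazonEKSClusterPolicy",
    "aws_iam_role_policy_attachment.demo-cluster-AmazonEKSServicePolicy",
  ]
}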

The Terraform depends_on clause will not do much here, since the timing issue seems to be happening inside the EKS service itself.
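
If you still want everything in a single terraform apply, one possible workaround (a sketch, not something from your config; it assumes kubectl and the generated kube_config.yaml are available on the machine running Terraform) is to add an explicit wait that polls the API server before the ConfigMap is created:

resource "null_resource" "wait_for_eks_api" {
  # Keep retrying until the API server answers a simple request.
  provisioner "local-exec" {
    command = "until kubectl --kubeconfig kube_config.yaml get ns kube-system; do sleep 10; done"
  }

  depends_on = ["aws_eks_cluster.eks_cluster"]
}

You would then add null_resource.wait_for_eks_api to the depends_on of kubernetes_config_map.config_map_aws_auth.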

I suggest you try the terraform-aws-eks module, which uses the same resource described in the docs. You can also browse through its code if you'd like to see how it solves the problem you are hitting.
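
A minimal sketch of what that might look like (the module's inputs change between versions, so treat the argument names and the placeholder VPC/subnet references here as assumptions and check the docs for the version you pin):

module "eks" {
  source = "terraform-aws-modules/eks/aws"

  cluster_name = "my-eks-cluster"
  vpc_id       = "${aws_vpc.eks_vpc.id}"
  subnets      = ["${aws_subnet.eks_subnet.*.id}"]

  # The module creates and applies the aws-auth ConfigMap itself, which is
  # where it deals with the ordering problem described above.
}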