1 vote

I am deploying an AWS Elastic Kubernetes Service (EKS) cluster on AWS. While deploying the cluster from my local machine I am running into a small problem, though you could argue it isn't really an error.

When I deploy the EKS cluster with Terraform from my local machine, it creates all the required infrastructure on AWS, but when it has to deploy resources onto the cluster it tries to do so through kubectl. Since kubectl is not yet configured for the newly created cluster, Terraform throws an error.

I can easily fix this by pointing kubectl at the newly created cluster with the command below, but I don't want to do it manually. Is there any way to configure kubectl automatically as part of the run?

Command: aws eks --region us-west-2 update-kubeconfig --name clustername

FYI: I am using the AWS CLI.

2 Answers

1 vote

You can use Terraform's local-exec provisioner:

    resource "null_resource" "kubectl" {
       depends_on = <CLUSTER_IS_READY>
       provisioner "local-exec" {
          command = "aws eks --region us-west-2 update-kubeconfig --name clustername"
          }
       }
 }
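For instance, if the cluster is declared directly as an aws_eks_cluster resource, the placeholder can be wired up like this (just a sketch; the resource name example is hypothetical, so adjust it to your own resource or module output):

    resource "null_resource" "kubectl" {
      # Assumes a cluster resource named aws_eks_cluster.example (hypothetical)
      depends_on = [aws_eks_cluster.example]

      # Re-run the provisioner if the cluster is ever recreated
      triggers = {
        cluster_id = aws_eks_cluster.example.id
      }

      provisioner "local-exec" {
        command = "aws eks --region us-west-2 update-kubeconfig --name ${aws_eks_cluster.example.name}"
      }
    }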
0 votes

You need to have a few things in place:

  • dependency on the cluster resource/module: aws_eks_cluster, terraform-aws-modules/eks/aws, etc.
  • when to regenerate (in specific cases vs. always vs. never, i.e. generate only once)
  • which shell to use (bash, as it is the most common)
  • exit on error (set -e)
  • wait for the cluster to be ready (aws eks wait)
  • update the kubeconfig

E.g. I use:

resource "null_resource" "merge_kubeconfig" {
  triggers = {
    always = timestamp()
  }

  depends_on = [module.eks_cluster]

  provisioner "local-exec" {
    interpreter = ["/bin/bash", "-c"]
    command = <<EOT
      set -e
      echo 'Applying Auth ConfigMap with kubectl...'
      aws eks wait cluster-active --name '${local.cluster_name}'
      aws eks update-kubeconfig --name '${local.cluster_name}' --alias '${local.cluster_name}-${var.region}' --region=${var.region}
    EOT
  }
}

Note that several things could cause the kubeconfig to require re-merging, such as a new certificate or new users, and catching all of them can be a bit tricky. The cost of always re-merging on every terraform apply is minimal, hence the timestamp() trigger. Adjust as necessary; e.g., I have seen this used:

triggers = {
  cluster_updated                     = join("", aws_eks_cluster.default.*.id)
  worker_roles_updated                = local.map_worker_roles_yaml
  additional_roles_updated            = local.map_additional_iam_roles_yaml
  additional_users_updated            = local.map_additional_iam_users_yaml
  additional_aws_accounts_updated     = local.map_additional_aws_accounts_yaml
  configmap_auth_file_content_changed = join("", local_file.configmap_auth.*.content)
  configmap_auth_file_id_changed      = join("", local_file.configmap_auth.*.id)
}
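If you would rather not re-run on every apply, a narrower option (just a sketch, assuming a single aws_eks_cluster.default without count) is to trigger only on the attributes that actually invalidate the kubeconfig, namely the API endpoint and the cluster CA:

triggers = {
  # Re-merge only when the endpoint or cluster certificate changes
  cluster_endpoint = aws_eks_cluster.default.endpoint
  cluster_ca_data  = aws_eks_cluster.default.certificate_authority[0].data
}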