
I followed "https://www.terraform.io/docs/providers/aws/guides/eks-getting-started.html" to create an EKS cluster using terraform.

I was able to create the config map successfully, but I am unable to get the node details -

$ ./kubectl_1.10.3_darwin get nodes 
No resources found.

Service details -

$ ./kubectl_1.10.3_darwin get services
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   2h

Kubelet logs on the nodes -

Aug  5 09:14:32 ip-172-31-18-205 kubelet: I0805 09:14:32.617738   25463 aws.go:1026] Building AWS cloudprovider
Aug  5 09:14:32 ip-172-31-18-205 kubelet: I0805 09:14:32.618168   25463 aws.go:988] Zone not specified in configuration file; querying AWS metadata service
Aug  5 09:14:32 ip-172-31-18-205 kubelet: E0805 09:14:32.794914   25463 tags.go:94] Tag "KubernetesCluster" nor "kubernetes.io/cluster/..." not found; Kubernetes may behave unexpectedly.
Aug  5 09:14:32 ip-172-31-18-205 kubelet: F0805 09:14:32.795622   25463 server.go:233] failed to run Kubelet: could not init cloud provider "aws": AWS cloud failed to find ClusterID
Aug  5 09:14:32 ip-172-31-18-205 systemd: kubelet.service: main process exited, code=exited, status=255/n/a
Aug  5 09:14:32 ip-172-31-18-205 systemd: Unit kubelet.service entered failed state.
Aug  5 09:14:32 ip-172-31-18-205 systemd: kubelet.service failed.

The AWS getting started documentation doesn't mention any tag-related information: "https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html".

After a while I found out that I had missed adding resource tags like "kubernetes.io/cluster/*" to my networking resources.

My networking resources are pre-created; I use remote state to fetch the required details. I believe I can either add the tags to those resources or create a new VPC environment.

Is there any alternate way to solve this without adding tags or provisioning new resources?


2 Answers


Make sure you add a tag similar to the one below to your VPC, subnets & ASGs -

"kubernetes.io/cluster/${CLUSTER_NAME}" = "shared"

From the Terraform docs:

"NOTE: The usage of the specific kubernetes.io/cluster/* resource tags below are required for EKS and Kubernetes to discover and manage networking resources."

"NOTE: The usage of the specific kubernetes.io/cluster/* resource tag below is required for EKS and Kubernetes to discover and manage compute resources."

I had missed propagating the tags through the auto-scaling group to the worker nodes. I added the code below to my ASG Terraform module and it started working - at least the nodes were able to join the cluster. You also need to add the tag to the VPC & subnets so that EKS and Kubernetes can discover and manage networking resources.

For VPC -

locals {
  cluster_tags = {
    "kubernetes.io/cluster/${var.project}-${var.env}-cluster" = "shared"
  }
}

resource "aws_vpc" "myvpc" {
  cidr_block = "${var.vpc_cidr}"
  enable_dns_hostnames = true

  tags = "${merge(map("Name", format("%s-%s-vpcs", var.project, var.env)), var.default_tags, local.cluster_tags)}"
}

resource "aws_subnet" "private_subnet" {
  count = "${length(var.private_subnets)}"

  vpc_id            = "${aws_vpc.myvpc.id}"
  cidr_block        = "${var.private_subnets[count.index]}"
  availability_zone = "${element(var.azs, count.index)}"

  tags = "${merge(map("Name", format("%s-%s-pvt-%s", var.project, var.env, element(var.azs, count.index))), var.default_tags, local.cluster_tags)}"
}

resource "aws_subnet" "public_subnet" {
  count = "${length(var.public_subnets)}"

  vpc_id            = "${aws_vpc.myvpc.id}"
  cidr_block        = "${var.public_subnets[count.index]}"
  availability_zone = "${element(var.azs, count.index)}"
  map_public_ip_on_launch = "true"

  tags = "${merge(map("Name", format("%s-%s-pub-%s", var.project, var.env, element(var.azs, count.index))), var.default_tags, local.cluster_tags)}"
}

For ASGs -

resource "aws_autoscaling_group" "asg-node" {
    name = "${var.project}-${var.env}-asg-${aws_launch_configuration.lc-node.name}"

    vpc_zone_identifier = ["${var.vpc_zone_identifier}"]
    min_size  = 1
    desired_capacity  = 1
    max_size  = 1
    target_group_arns = ["${var.target_group_arns}"]
    default_cooldown          = 100
    health_check_grace_period = 100
    termination_policies      = ["ClosestToNextInstanceHour", "NewestInstance"]
    health_check_type         = "EC2"
    depends_on                = ["aws_launch_configuration.lc-node"]
    launch_configuration      = "${aws_launch_configuration.lc-node.name}"

    lifecycle {
      create_before_destroy = true
    }

    tags = [
      {
      key                 = "Name"
      value               = "${var.project}-${var.env}-asg-eks"
      propagate_at_launch = true
       },
      {
      key                 = "role"
      value               = "eks-worker"
      propagate_at_launch = true
       },
       {
      key                 = "kubernetes.io/cluster/${var.project}-${var.env}-cluster"
      value               = "owned"
      propagate_at_launch = true
      }
   ]
}

I was able to deploy a sample application after the above changes.

PS - Answering this since the AWS EKS getting started documentation doesn't state these instructions clearly, and people creating ASGs manually may fall into this issue. This might save others some time.


I've tried to summarize below all the resources that require tagging - I hope I haven't missed anything.


Tagging Network resources

(Summary of this doc).

1) VPC tagging requirement

When you create an Amazon EKS cluster earlier than version 1.15, Amazon EKS tags the VPC containing the subnets you specify in the following way so that Kubernetes can discover it:

Key                                       Value

kubernetes.io/cluster/<cluster-name>      shared

Key: The <cluster-name> value matches your Amazon EKS cluster's name.
Value: The shared value allows more than one cluster to use this VPC.
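In Terraform, the VPC tag above can be sketched as follows. This is illustrative only - the resource name, CIDR, and "my-eks-cluster" are placeholders for your own values:

```hcl
# Tag the VPC so EKS/Kubernetes can discover it.
# Replace "my-eks-cluster" with your actual cluster name.
resource "aws_vpc" "eks_vpc" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true

  tags = {
    "Name"                                 = "eks-vpc"
    "kubernetes.io/cluster/my-eks-cluster" = "shared"
  }
}
```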

2) Subnet tagging requirement

When you create your Amazon EKS cluster, Amazon EKS tags the subnets you specify in the following way so that Kubernetes can discover them:

Note: All subnets (public and private) that your cluster uses for resources should have this tag.

Key                                     Value
kubernetes.io/cluster/<cluster-name>    shared

Key: The <cluster-name> value matches your Amazon EKS cluster's name.
Value: The shared value allows more than one cluster to use this subnet.

3) Private subnet tagging requirement for internal load balancers

Private subnets must be tagged in the following way so that Kubernetes knows it can use the subnets for internal load balancers. If you use an Amazon EKS AWS CloudFormation template to create...

Key                              Value

kubernetes.io/role/internal-elb  1

4) Public subnet tagging option for external load balancers

You must tag the public subnets in your VPC so that Kubernetes knows to use only those subnets for external load balancers instead of choosing a public subnet in each Availability Zone (in lexicographical order by subnet ID). If you use an Amazon EKS AWS CloudFormation template...

Key                      Value

kubernetes.io/role/elb   1
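Putting the subnet requirements together, a sketch of a tagged private and public subnet (the VPC reference, CIDRs, AZ, and cluster name are placeholders):

```hcl
# Private subnet: discoverable by the cluster and usable for internal load balancers.
resource "aws_subnet" "private" {
  vpc_id            = "${aws_vpc.eks_vpc.id}" # placeholder VPC reference
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a"

  tags = {
    "kubernetes.io/cluster/my-eks-cluster" = "shared"
    "kubernetes.io/role/internal-elb"      = "1"
  }
}

# Public subnet: discoverable by the cluster and usable for external load balancers.
resource "aws_subnet" "public" {
  vpc_id                  = "${aws_vpc.eks_vpc.id}"
  cidr_block              = "10.0.101.0/24"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = true

  tags = {
    "kubernetes.io/cluster/my-eks-cluster" = "shared"
    "kubernetes.io/role/elb"               = "1"
  }
}
```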

Tagging Auto Scaling group

(Summary of this doc).

The Cluster Autoscaler requires the following tags on your node group Auto Scaling groups so that they can be auto-discovered.

If you used the previous eksctl commands to create your node groups, these tags are automatically applied. If not, you must manually tag your Auto Scaling groups with the following tags.

Key                                       Value

k8s.io/cluster-autoscaler/<cluster-name>  owned

k8s.io/cluster-autoscaler/enabled         true
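On an ASG, these become tag blocks with propagate_at_launch so the tags also reach the instances. A minimal sketch (the ASG name, sizes, subnet ID, launch configuration reference, and cluster name are placeholders):

```hcl
# Tags that let the Cluster Autoscaler auto-discover this node group's ASG.
resource "aws_autoscaling_group" "eks_nodes" {
  name                 = "eks-node-asg"
  min_size             = 1
  max_size             = 3
  vpc_zone_identifier  = ["subnet-abc123"] # placeholder subnet ID
  launch_configuration = "${aws_launch_configuration.eks_node.name}" # placeholder

  tags = [
    {
      key                 = "k8s.io/cluster-autoscaler/my-eks-cluster"
      value               = "owned"
      propagate_at_launch = true
    },
    {
      key                 = "k8s.io/cluster-autoscaler/enabled"
      value               = "true"
      propagate_at_launch = true
    },
  ]
}
```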

Tagging Security groups

(Taken from the end of this doc).

If you have more than one security group associated with your nodes, then one of the security groups must have the following tag applied to it. If you have only one security group associated with your nodes, then the tag is optional.

Key                                   Value

kubernetes.io/cluster/<cluster-name>  owned
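As a Terraform sketch, tagging one node security group as owned by the cluster might look like this (name, VPC reference, and cluster name are placeholders):

```hcl
# Mark this node security group as owned by the cluster.
resource "aws_security_group" "eks_nodes" {
  name_prefix = "eks-node-"
  vpc_id      = "${aws_vpc.eks_vpc.id}" # placeholder VPC reference

  tags = {
    "kubernetes.io/cluster/my-eks-cluster" = "owned"
  }
}
```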