3
votes

I am very stuck on the "Launching worker nodes" step of the AWS EKS guide, and to be honest, at this point I don't know what's wrong. When I run kubectl get svc, I can see my cluster, so that's good news. I have this in my aws-auth-cm.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::Account:role/rolename
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
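Before applying this ConfigMap, it's worth double-checking that the rolearn exactly matches the IAM role attached to the worker instances, since a mismatch there is a common reason nodes never register. A rough check could look like this (rolename is a placeholder for your actual node role):

```shell
# Confirm the node IAM role exists and print its exact ARN
# ("rolename" is a placeholder for your worker node role)
aws iam get-role --role-name rolename --query 'Role.Arn' --output text

# Apply the ConfigMap and verify it landed in kube-system
kubectl apply -f aws-auth-cm.yaml
kubectl describe configmap -n kube-system aws-auth
```

The ARN printed by the first command should match the rolearn in the ConfigMap character for character.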

Here is my config in .kube:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: CERTIFICATE
    server: server
  name: arn:aws:eks:region:account:cluster/clustername
contexts:
- context:
    cluster: arn:aws:eks:region:account:cluster/clustername
    user: arn:aws:eks:region:account:cluster/clustername
  name: arn:aws:eks:region:account:cluster/clustername
current-context: arn:aws:eks:region:account:cluster/clustername
kind: Config
preferences: {}
users:
- name: arn:aws:eks:region:account:cluster/clustername
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - clustername
      command: aws-iam-authenticator.exe
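As an aside, newer versions of kubectl may reject the client.authentication.k8s.io/v1alpha1 exec API version. Rather than maintaining this file by hand, you can let the AWS CLI regenerate it; a sketch, where clustername and region are the placeholders from above:

```shell
# Regenerate the kubeconfig entry for the cluster
# ("clustername" and "region" are placeholders from the question)
aws eks update-kubeconfig --name clustername --region region

# Sanity-check that authentication still works
kubectl get svc
```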

I have launched an EC2 instance with the advised AMI.

Some things to note:

  • I launched my cluster with the CLI.
  • I created a key pair.
  • I am not using the CloudFormation stack.
  • I attached these policies to the role of my EC2 instance: AmazonEKS_CNI_Policy, AmazonEC2ContainerRegistryReadOnly, AmazonEKSWorkerNodePolicy.
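For reference, attaching those three managed policies via the CLI looks roughly like this (rolename is a placeholder for the node instance role):

```shell
# Attach the AWS-managed policies that EKS worker nodes need
# ("rolename" is a placeholder for your node instance role)
for policy in AmazonEKSWorkerNodePolicy AmazonEKS_CNI_Policy AmazonEC2ContainerRegistryReadOnly; do
  aws iam attach-role-policy \
    --role-name rolename \
    --policy-arn "arn:aws:iam::aws:policy/${policy}"
done
```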

It is my first attempt at kubernetes and EKS, so please keep that in mind :). Thanks for your help!


2 Answers

1
votes

Your config file and auth file look right. Maybe there is an issue with the security group assignments? Can you share the exact steps you followed to create the cluster and the worker nodes? Also, is there a particular reason you used the CLI instead of the console? If it's your first attempt at EKS, it's probably worth setting up a cluster through the console at least once.
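To rule out the security-group theory, you can inspect what the cluster itself expects; a sketch, with clustername and region as placeholders:

```shell
# Show the cluster's VPC config, including its security group IDs
# ("clustername" and "region" are placeholders)
aws eks describe-cluster --name clustername --region region \
  --query 'cluster.resourcesVpcConfig' --output json
```

The worker nodes need to be able to reach the cluster endpoint on port 443, and the control plane needs to be able to reach the kubelets through the node security group.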

0
votes

Sometimes, for whatever reason, the aws-auth ConfigMap does not get applied automatically, so it has to be added manually. I ran into this issue myself, so I'm leaving the fix here in case it helps someone.

  1. Check to see if you have already applied the aws-auth ConfigMap.

    kubectl describe configmap -n kube-system aws-auth

If you receive the error "Error from server (NotFound): configmaps "aws-auth" not found", proceed with the following steps:

  2. Download the configuration map.

    curl -o aws-auth-cm.yaml https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/aws-auth-cm.yaml

  3. Open the file with your favorite text editor. Replace <ARN of instance role (not instance profile)> with the Amazon Resource Name (ARN) of the IAM role associated with your nodes, and save the file.

  4. Apply the configuration.

    kubectl apply -f aws-auth-cm.yaml

  5. Watch the status of your nodes and wait for them to reach the Ready status.

    kubectl get nodes --watch

You can also go to the AWS console and watch the worker nodes being added.
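If the nodes still don't appear after applying the ConfigMap, it can help to SSH into a worker instance and check whether the kubelet is actually running and able to reach the cluster; a sketch of what to look at on the node:

```shell
# On the worker instance: check that the kubelet service is up
sudo systemctl status kubelet

# Inspect recent kubelet logs for bootstrap or auth errors
sudo journalctl -u kubelet --no-pager | tail -50
```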

Find more info here