2
votes

I'm using an EKS cluster and I'd like to use the cluster-autoscaler with it.

cluster-autoscaler version : 1.2.2

EKS kubernetes version : 1.10

So I have a brand new EKS cluster (with existing nodes) and I'd like to add new worker nodes with the CloudFormation script (as explained here).

This script provisions 3 new t2.small EC2 instances into an auto-scaling group. Because I'd like to use nodeSelector, I have tagged the ASG as explained here:

If you are using nodeSelector you need to tag the ASG with a node-template key "k8s.io/cluster-autoscaler/node-template/label/"

In my AWS console I see my tag on my ASG like this:

(screenshot of the tag on the ASG)
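For reference, the tag I applied has this shape (the value asg2 is what I expect to become the label value on the nodes):

```
Key:   k8s.io/cluster-autoscaler/node-template/label/project
Value: asg2
```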

So, my problem is that I see the new nodes in Kubernetes, but the tag k8s.io/cluster-autoscaler/node-template/label/project has not been applied as a label on these nodes. I expected to see a label project=asg2.

... I don't know what I have missed.

The only node labels I see are :

beta.kubernetes.io/arch=amd64
beta.kubernetes.io/instance-type=t2.small
beta.kubernetes.io/os=linux
failure-domain.beta.kubernetes.io/region=us-east-1
failure-domain.beta.kubernetes.io/zone=us-east-1c
kubernetes.io/hostname=ip-xxx-xxx-xxx-xxx.ec2.internal

Here is the launch command of my cluster-autoscaler pod:

"command": [
    "./cluster-autoscaler",
    "--v=4",
    "--stderrthreshold=info",
    "--cloud-provider=aws",
    "--skip-nodes-with-local-storage=false",
    "--nodes=1:3:at-eks-worker-nodes-asg2-NodeGroup-1QOBK4RZ42IZI"
]

What have I missed?

Thank you for your help

1
Did you find a solution to this problem? Dimitri's answer is useful, but I'd prefer a solution that doesn't require an extra script running alongside the auto-scaling group to make sure new nodes get tagged properly. - Brannon

1 Answer

0
votes

If you are using the latest CloudFormation template from AWS (the one whose user_data section looks like this):

#!/bin/bash
set -o xtrace
/etc/eks/bootstrap.sh ${ClusterName} ${BootstrapArguments}
/opt/aws/bin/cfn-signal --exit-code $? \
    --stack  ${AWS::StackName} \
    --resource NodeGroup  \
    --region ${AWS::Region}

You can hook into that. You just need to wait for the node to register with the cluster, and then you can add whatever labels you need. Keep in mind that $(hostname) on the EC2 instance is the node's name in Kubernetes. For example:

#!/bin/bash
set -o xtrace
/etc/eks/bootstrap.sh ${ClusterName} ${BootstrapArguments}
# Wait until the kubelet has registered this node with the API server
until kubectl --kubeconfig=/var/lib/kubelet/kubeconfig get nodes $(hostname) > /dev/null 2>&1
do
    sleep 1
done
# Apply the desired label to the newly registered node
kubectl --kubeconfig=/var/lib/kubelet/kubeconfig label nodes $(hostname) thisis=mylabel
/opt/aws/bin/cfn-signal --exit-code $? \
    --stack  ${AWS::StackName} \
    --resource NodeGroup  \
    --region ${AWS::Region}
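Alternatively, if your version of the EKS bootstrap script supports the --kubelet-extra-args option, you can avoid the wait loop entirely by having the kubelet set the label itself at registration time. A sketch (the label project=asg2 mirrors the question; ${ClusterName} is the same CloudFormation parameter as above):

```
#!/bin/bash
set -o xtrace
# --kubelet-extra-args forwards flags to the kubelet; --node-labels applies
# the label when the node first registers, so no extra script is needed.
/etc/eks/bootstrap.sh ${ClusterName} \
    --kubelet-extra-args '--node-labels=project=asg2'
/opt/aws/bin/cfn-signal --exit-code $? \
    --stack  ${AWS::StackName} \
    --resource NodeGroup  \
    --region ${AWS::Region}
```

This also addresses the concern in the comments about running an extra script alongside the auto-scaling group.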

If you are using the older CloudFormation templates, a possible answer is here.

Of course, should AWS change things around again, these may no longer work.

Hope that helps!