
I have the following configuration:

When I change the pod spec/config, I see the following:

  • a new instance is created (2 running)
    • 2 pods in state Running and 1 in Pending
  • another new instance is created (3 running)
    • 2 pods in state Running (age updated for both)
    • both pods running on instances in one AZ
    • 2 instances running in 1 AZ after the scale-down timeout

Is there a proper way to configure the autoscaler to create instances/pods across 2 different AZs?


1 Answer


There are two levels to this: the cluster autoscaler's instance placement, and then the Kubernetes pod placement.

Create an auto scaling group for each availability zone:

  • Cluster autoscaler does not support Auto Scaling Groups that span multiple Availability Zones; instead, use one Auto Scaling Group per Availability Zone and enable the --balance-similar-node-groups feature. If you do use a single Auto Scaling Group that spans multiple Availability Zones, you will find that AWS unexpectedly terminates nodes without draining them, because of the AZ rebalancing feature.
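As a rough sketch, the cluster-autoscaler deployment would then list each per-AZ Auto Scaling Group explicitly (the ASG names and min/max sizes below are placeholders; the flags themselves are real cluster-autoscaler options):

```yaml
# Fragment of the cluster-autoscaler container spec
command:
- ./cluster-autoscaler
- --cloud-provider=aws
- --balance-similar-node-groups        # keep similar node groups at similar sizes
- --nodes=1:4:eks-nodes-us-east-1a     # one ASG per AZ (placeholder names)
- --nodes=1:4:eks-nodes-us-east-1b
```

With --balance-similar-node-groups enabled, the autoscaler treats the per-AZ groups as interchangeable and tries to keep their node counts balanced when scaling up.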

Then for Kubernetes, use pod anti-affinity on the EKS-populated failure-domain.beta.kubernetes.io/zone label (on newer Kubernetes versions this label is topology.kubernetes.io/zone).
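A minimal sketch of that anti-affinity on a Deployment (the name, labels, and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # placeholder
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          # Hard requirement: no two replicas of this app in the same zone
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: my-app
            topologyKey: failure-domain.beta.kubernetes.io/zone
      containers:
      - name: my-app
        image: nginx      # placeholder image
```

Note that with a hard (required) rule, a third replica would stay Pending unless a third zone is available; preferredDuringSchedulingIgnoredDuringExecution is the softer alternative.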

EBS volumes don't span availability zones, so if you are using persistent volumes you may get stuck with all pods in one zone or, worst case, pods that won't schedule at all.
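One common mitigation (a sketch, assuming a Kubernetes version that supports it) is a StorageClass with volumeBindingMode: WaitForFirstConsumer, so the EBS volume is only provisioned once the pod has been scheduled, in that pod's zone:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-wait          # placeholder name
provisioner: kubernetes.io/aws-ebs
volumeBindingMode: WaitForFirstConsumer
```

This avoids the case where the volume lands in one zone first and then pins the pod there regardless of where capacity exists.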