I have enabled a capacity provider on my ECS cluster and set the EC2 Auto Scaling group's min, max, and desired capacity to 1, 3, and 2 respectively. I have also enabled auto scaling for the ECS service, with min, max, and desired task counts of 2, 6, and 2.
When I deployed the whole setup with Terraform, the two tasks were launched on two separate instances. During a load test, the ECS tasks and EC2 instances successfully scaled out to 6 and 3. But after the load test completed, the ECS tasks were scaled in, while the EC2 instances were never removed.
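For reference, the service-level auto scaling is configured roughly like this (a simplified sketch; the resource names, the `aws_ecs_service.service` reference, and the CPU-based target tracking metric are illustrative, not my exact code):

```hcl
# Sketch of the ECS service auto scaling described above (min 2, max 6).
# Names and the tracked metric are illustrative.
resource "aws_appautoscaling_target" "ecs_target" {
  min_capacity       = 2
  max_capacity       = 6
  resource_id        = "service/${aws_ecs_cluster.cluster.name}/${aws_ecs_service.service.name}"
  scalable_dimension = "ecs:service:DesiredCount"
  service_namespace  = "ecs"
}

resource "aws_appautoscaling_policy" "ecs_policy" {
  name               = "${local.name}-cpu"
  policy_type        = "TargetTrackingScaling"
  resource_id        = aws_appautoscaling_target.ecs_target.resource_id
  scalable_dimension = aws_appautoscaling_target.ecs_target.scalable_dimension
  service_namespace  = aws_appautoscaling_target.ecs_target.service_namespace

  target_tracking_scaling_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ECSServiceAverageCPUUtilization"
    }
    target_value = 70
  }
}
```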
Also, does `target_capacity` in the `managed_scaling` block define the threshold used to create the auto scaling policy for the EC2 Auto Scaling group?
resource "aws_autoscaling_group" "asg" {
  ...
  min_size              = var.asg_min_size
  max_size              = var.asg_max_size
  desired_capacity      = var.asg_desired_capacity
  protect_from_scale_in = true

  tag {
    key                 = "Name"
    value               = local.name
    propagate_at_launch = true
  }

  tag {
    key                 = "AmazonECSManaged"
    value               = ""
    propagate_at_launch = true
  }
}
resource "aws_ecs_capacity_provider" "capacity_provider" {
  name = local.name

  auto_scaling_group_provider {
    auto_scaling_group_arn         = aws_autoscaling_group.asg.arn
    managed_termination_protection = "ENABLED"

    managed_scaling {
      maximum_scaling_step_size = 4
      minimum_scaling_step_size = 1
      status                    = "ENABLED"
      target_capacity           = 70
    }
  }

  provisioner "local-exec" {
    when    = destroy
    command = "aws ecs put-cluster-capacity-providers --cluster ${self.name} --capacity-providers [] --default-capacity-provider-strategy []"
  }
}
resource "aws_ecs_cluster" "cluster" {
  name = local.name

  capacity_providers = [
    aws_ecs_capacity_provider.capacity_provider.name,
  ]

  tags = merge(
    {
      "Name"        = local.name,
      "Environment" = var.environment,
      "Description" = var.description,
      "Service"     = var.service,
    },
    var.tags
  )
}
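For completeness, the ECS service launches its tasks through this capacity provider via a `capacity_provider_strategy` block, roughly like the following (simplified; the service and task definition names are placeholders):

```hcl
# Sketch of how the service is attached to the capacity provider.
# aws_ecs_task_definition.task and the resource names are placeholders.
resource "aws_ecs_service" "service" {
  name            = local.name
  cluster         = aws_ecs_cluster.cluster.id
  task_definition = aws_ecs_task_definition.task.arn
  desired_count   = 2

  capacity_provider_strategy {
    capacity_provider = aws_ecs_capacity_provider.capacity_provider.name
    weight            = 100
    base              = 0
  }
}
```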