This is my solution, largely based on guidance from a great AWS employee (Alfredo J).
First, create a Lambda function with the following Python script:
import json
import boto3
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

client = boto3.client('ecs')


def lambda_handler(event, context):
    cluster = event["cluster"]
    service_names = event["service_names"]
    service_desired_count = int(event["service_desired_count"])

    # Set every service in the comma-separated list to the desired task count
    for service_name in service_names.split(","):
        client.update_service(
            cluster=cluster,
            service=service_name,
            desiredCount=service_desired_count
        )
        logger.info(
            "Updated {0} service in {1} cluster with desired count set to {2} tasks".format(
                service_name, cluster, service_desired_count
            )
        )

    return {
        'statusCode': 200,
        'new_desired_count': service_desired_count
    }
The script expects an event in JSON format with the following variables:
{
    "cluster": "clusterName",
    "service_names": "service1,service2",
    "service_desired_count": "0"
}
Where:
cluster
is the name of the cluster you want to modify.
service_names
is a comma-separated list of the services to update.
service_desired_count
is the desired number of tasks for each service. 0 stops the service(s); any other number starts them.
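Before scheduling anything, you can test the function by invoking it directly with that payload. This is just an illustrative sketch; ecs-scale-services is an assumed function name, so substitute your own:

import json
import boto3

lambda_client = boto3.client('lambda')

# 'ecs-scale-services' is a hypothetical name -- replace it with your Lambda function's name
response = lambda_client.invoke(
    FunctionName='ecs-scale-services',
    Payload=json.dumps({
        "cluster": "clusterName",
        "service_names": "service1,service2",
        "service_desired_count": "0"
    })
)

# Print the handler's return value ({'statusCode': 200, 'new_desired_count': ...})
print(json.loads(response['Payload'].read()))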
Once everything is in place, create rules in Amazon EventBridge (formerly CloudWatch Events) that invoke the Lambda function on the schedule you expect, passing the JSON above as the event input.
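You can create the rule from the console, or script it with boto3 as in the sketch below, which schedules a scale-down to 0 tasks every day at 20:00 UTC. The rule name, schedule, and function name/ARN (ecs-scale-services) are assumed placeholders for illustration:

import json
import boto3

events = boto3.client('events')
lambda_client = boto3.client('lambda')

# Hypothetical names/ARN -- replace with your own
rule_name = 'stop-ecs-services-nightly'
function_name = 'ecs-scale-services'
function_arn = 'arn:aws:lambda:REGION:ACCOUNT_ID:function:ecs-scale-services'

# Scheduled rule: every day at 20:00 UTC
rule_arn = events.put_rule(
    Name=rule_name,
    ScheduleExpression='cron(0 20 * * ? *)'
)['RuleArn']

# Target the Lambda function and pass the event payload as static input
events.put_targets(
    Rule=rule_name,
    Targets=[{
        'Id': 'scale-down-ecs',
        'Arn': function_arn,
        'Input': json.dumps({
            "cluster": "clusterName",
            "service_names": "service1,service2",
            "service_desired_count": "0"
        })
    }]
)

# Allow EventBridge to invoke the function
lambda_client.add_permission(
    FunctionName=function_name,
    StatementId='{0}-invoke'.format(rule_name),
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule_arn
)

A second rule with a different schedule and a non-zero service_desired_count can be used to start the services again.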
If something fails, check the function's CloudWatch logs and double-check that the Lambda's IAM role has the required permissions, such as ecs:UpdateService.
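If that permission is missing, one option is to attach a minimal inline policy to the execution role, as sketched below. The role name ecs-scale-services-role and policy name allow-ecs-update-service are assumptions, and you may want to scope Resource to your specific service ARNs instead of "*":

import json
import boto3

iam = boto3.client('iam')

# Minimal policy granting only the ECS permission the handler calls
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ecs:UpdateService"],
        "Resource": "*"
    }]
}

# 'ecs-scale-services-role' is a hypothetical name -- use your Lambda's execution role
iam.put_role_policy(
    RoleName='ecs-scale-services-role',
    PolicyName='allow-ecs-update-service',
    PolicyDocument=json.dumps(policy_document)
)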