
I'm architecting AWS infrastructure.

I'm also new to AWS infrastructure.

I have three Docker containers, so I have to deploy them to ECS.

Below are the steps I followed to start my service:

aws ecs create-cluster --cluster-name test

ecs-cli configure --cluster test --region ap-northeast-2 --default-launch-type EC2 --config-name test

ecs-cli configure profile --access-key AWS_ACCESS_KEY --secret-key AWS_SECRET_KEY --profile-name test

ecs-cli up --keypair my_keypair --instance-role TestRole --instance-type t2.micro --cluster-config test --force

ecs-cli compose -f ecs-docker-compose.yml up --create-log-groups --cluster-config test

After running these, one EC2 instance, one task definition, and one cluster are created.

And the task definition is connected to the cluster.
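If it helps to verify, the created resources can be listed with standard aws CLI calls (output omitted):

aws ecs list-clusters

aws ecs list-container-instances --cluster test

aws ecs list-task-definitions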

But I wonder: how can I run multiple task definitions on one cluster?

For continuous deployment, here is my scenario (assume the ALB is already connected); a rough CLI sketch of these steps follows the list.

  1. Create a new task definition

  2. Connect the new task definition to the cluster

  3. If the new task definition is connected successfully, remove the old task definition
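For illustration, this is roughly what I mean by those steps using the raw aws CLI; the family name my-task, the revision numbers, the file new-task-def.json, and OLD_TASK_ID are placeholders, not my real setup:

# 1. Register a new revision of the task definition
aws ecs register-task-definition --cli-input-json file://new-task-def.json

# 2. Start a task from the new revision on the cluster
aws ecs run-task --cluster test --task-definition my-task:2

# 3. Once the new task is running, stop the old one and retire its revision
aws ecs stop-task --cluster test --task OLD_TASK_ID
aws ecs deregister-task-definition --task-definition my-task:1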

My overall questions:

  1. Is my continuous deployment scenario correct?

  2. How can I connect multiple task definitions to one cluster?

Thanks.


1 Answer


You can do that using a Service with a rolling update.

A Service is a definition of a task and the resources needed to keep it running, much like a Deployment in Kubernetes (if that is more familiar to you). You specify a task definition revision for the Service to use, and later update the Service to point at a different (new) revision, which triggers a rolling replacement of the running tasks.
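As a rough sketch (the service name my-service, the task definition family my-task, and the file ecs-task-def.json are placeholders, not taken from your setup):

# create the Service once, pinned to an initial task definition revision
aws ecs create-service --cluster test --service-name my-service --task-definition my-task:1 --desired-count 2

# each deployment: register a new revision, then point the Service at it
aws ecs register-task-definition --cli-input-json file://ecs-task-def.json
aws ecs update-service --cluster test --service my-service --task-definition my-task:2

The create-service call happens only once; from then on every deployment is register-task-definition plus update-service, and ECS rolls the running tasks over to the new revision according to the Service's deployment configuration.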

So your scenario is essentially what a rolling update already does; it's just handled for you by the Service, so there's no need to orchestrate the process manually.
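Since you are already using ecs-cli compose, the same idea applies there: run the compose file as a service rather than a bare task, and re-run the command after changing the compose file to roll out a new revision. A sketch, assuming the same file and config names from your question:

ecs-cli compose -f ecs-docker-compose.yml service up --create-log-groups --cluster-config test

ecs-cli compose -f ecs-docker-compose.yml service up --cluster-config test

Re-running service up after editing ecs-docker-compose.yml registers a new task definition revision and updates the service to it, which triggers the rolling replacement described above.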