
I am creating a Task on ECS (Fargate) with 3 containers inside, and two of the containers are essential. I found that if the non-essential container fails to start, the whole task fails. Is this expected?

In the task definition parameters documentation, https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html, it says:

If the essential parameter of a container is marked as true, and that container fails or stops for any reason, all other containers that are part of the task are stopped. If the essential parameter of a container is marked as false, then its failure does not affect the rest of the containers in a task. If this parameter is omitted, a container is assumed to be essential.

I have set essential to true on my major container and to false on the other containers. I wonder why this flag doesn't work in my ECS task.

I know that in most cases people create one container per task, but in my case I have multiple containers in one task. Moving my containers to separate tasks is not an option I'm considering. I am asking why the essential flag on a container doesn't work as expected.

Below is a screenshot of my task definition; you can see that the first container is not essential, which means its failure shouldn't cause the task to fail.

[screenshot: task definition showing the first container with Essential set to false]
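
For reference, here is a minimal JSON sketch of the containerDefinitions I am using (container names and images are placeholders; only the essential flags matter):

    {
      "family": "my-task",
      "containerDefinitions": [
        {
          "name": "first-container",
          "image": "first:latest",
          "essential": false
        },
        {
          "name": "major-container",
          "image": "major:latest",
          "essential": true
        },
        {
          "name": "third-container",
          "image": "third:latest",
          "essential": true
        }
      ]
    }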


1 Answer


Update:

Suppose you have two containers, Container A and Container B.

If Container B is not required by Container A, and you do not want A to be restarted when B goes down, all you need to do is set

    "essential": false,

in Container B's container definition within the task definition.

This works for me with ECS agent 1.36.2.
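
As a minimal sketch (container names and images are placeholders, and only the fields relevant here are shown), the task definition would look like:

    {
      "family": "my-task",
      "containerDefinitions": [
        {
          "name": "container-a",
          "image": "my-registry/container-a:latest",
          "essential": true
        },
        {
          "name": "container-b",
          "image": "my-registry/container-b:latest",
          "essential": false
        }
      ]
    }

With this setup, if container-b stops, container-a keeps running; if container-a stops, the whole task stops.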

Otherwise, this is the expected behaviour of ECS with a single task definition: all containers in a task share the same task definition and lifecycle, so if one container goes down for some reason, they all go down, and if you scale the task, all of its containers scale up together.

Normally, that was the way to work with legacy container linking; now that you have service discovery, it is better to use service discovery and put your containers in separate task definitions.
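
For example, when creating each container's own ECS service you can attach a service discovery registry via the serviceRegistries field of CreateService. A rough sketch of the CreateService input (cluster, service, task definition names, and the registry ARN are all placeholders):

    {
      "cluster": "my-cluster",
      "serviceName": "service-b",
      "taskDefinition": "task-b",
      "desiredCount": 1,
      "serviceRegistries": [
        {
          "registryArn": "arn:aws:servicediscovery:region:account:service/srv-xxxxxxxx"
        }
      ]
    }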

Also, this is not the suggested architecture for containers.

Your entire application stack does not need to exist on a single task definition, and in most cases it should not. Your application can span multiple task definitions by combining related containers into their own task definitions, each representing a single component. For more information, see:

task_definitions

So the question is: when should you put multiple containers in a single task definition?

You should put multiple containers in the same task definition if:

  • Containers share a common lifecycle (that is, they should be launched and terminated together).

  • Containers are required to be run on the same underlying host (that is, one container references the other on a localhost port).

  • You want your containers to share resources.

  • Your containers share data volumes.

Otherwise, you should define your containers in separate task definitions so that you can scale, provision, and deprovision them separately (see the sketch below).

application_architecture
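
To make that split concrete, here is a hedged sketch (all names are placeholders) of the same two containers defined as independent task definitions, so each can be scaled and deployed on its own:

    {
      "family": "task-a",
      "containerDefinitions": [
        { "name": "container-a", "image": "my-registry/container-a:latest", "essential": true }
      ]
    }

    {
      "family": "task-b",
      "containerDefinitions": [
        { "name": "container-b", "image": "my-registry/container-b:latest", "essential": true }
      ]
    }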