I'm currently getting to grips with the AWS platform as part of a large migration project. We have a number of containerised microservices (Docker) being deployed using ECS/Fargate. I am trying to understand the best way of handling deployment of the same container image to multiple environments whilst still being able to retain environment-specific configuration.

For example, I have a development and a production environment, and each environment naturally has its own set of configuration values. So far I have centralised my configuration using AWS Parameter Store, which deals with this situation nicely as the application just pulls down its configuration from the relevant parameter path at startup. However, I need some way of making my containers context aware - that is, a way to tell them whether they are serving the development or the production environment. My preference is to inject an environment variable into the container when ECS starts a new task, but the only way to do this seems to be inside the task definition where the container is configured, which would mean having two separate task definitions, one for development and another for production, just for the sake of passing a single parameter through to the container at startup.
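
For context, the startup lookup I'm describing is roughly the following, using get-parameters-by-path (the path convention and names here are just placeholders for ours):

aws ssm get-parameters-by-path \
  --path /my-service/development \
  --recursive \
  --with-decryption

So all the container really needs is a single variable telling it which path segment (development or production) to use.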

I hope I've explained the problem I'm trying to overcome in enough detail. Does anyone have any information on this they can share, as I'm struggling to find anything online? It seems overkill to have to duplicate task definitions just for this purpose. Happy to be told I'm just using ECS 'wrong' given it's my first time using it.

Our deployment pipeline runs the following command once a new container image has been pushed to ECR:

aws ecs update-service --cluster development --service my-service --force-new-deployment

I have seen that you can override options when using the aws ecs run-task CLI command; however, this appears to bypass the service entirely, which I don't want as I'd lose the auto-scaling capabilities that are important to this project.
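
For reference, the override I'm referring to looks something like this (values are placeholders), and it launches a one-off task outside of the service:

aws ecs run-task \
  --cluster development \
  --task-definition my-service \
  --overrides '{"containerOverrides": [{"name": "my-service", "environment": [{"name": "APP_ENV", "value": "development"}]}]}'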

Thanks in advance

Are you managing your infrastructure as code, i.e. are you using CloudFormation, Terraform, etc.? – GreenyMcDuff
I'm getting there with CloudFormation: the plan is to use IaC as much as possible where there's a case for it. So far I've got a template that will create the ECS service definition (not task definition) and set up the API Gateway appropriately. – pr.lwd

1 Answer


The first thing I would do is make sure your Task Definition is managed by CloudFormation.

Once this is done, it should be trivial to manage the different variables on a per-environment basis using Mappings and the Fn::FindInMap intrinsic function.

I've ripped the below from this blog. If I've understood your problem correctly, I believe it illustrates what you're trying to achieve.

You can create an environment parameter to control which environment you're deploying to:

Parameters:
  ENV:
    Type: String
    AllowedValues: [prod, dev]    # constrain to the keys used in the Mappings below
  Prefix:
    Type: String                  # used to build environment-specific resource names

Then you can implement a Map to control the configuration between environments:

Mappings:
  ElasticsearchDomainConfiguration:
    prod:
      InstanceType: m4.10xlarge.elasticsearch   
    dev:
      InstanceType: t2.micro.elasticsearch

Resources:
  ElasticsearchDomain:
    Type: AWS::Elasticsearch::Domain
    Properties:
      DomainName: !Sub ${Prefix}-es-domain
      ElasticsearchClusterConfig:
        InstanceType:
          Fn::FindInMap: [ElasticsearchDomainConfiguration, !Ref ENV, InstanceType]
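
Applied to your ECS case, a minimal sketch of the same pattern on a Fargate task definition might look like the below (the mapping values, names, image and the APP_ENV variable are all placeholders for whatever your container expects at startup):

Mappings:
  TaskConfiguration:
    prod:
      Cpu: '1024'
      Memory: '2048'
    dev:
      Cpu: '256'
      Memory: '512'

Resources:
  MyTaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: !Sub ${Prefix}-my-service
      RequiresCompatibilities: [FARGATE]
      NetworkMode: awsvpc
      Cpu: !FindInMap [TaskConfiguration, !Ref ENV, Cpu]
      Memory: !FindInMap [TaskConfiguration, !Ref ENV, Memory]
      ContainerDefinitions:
        - Name: my-service
          Image: my-image:latest        # placeholder - point this at your ECR image
          Environment:
            - Name: APP_ENV             # the single variable your container reads
              Value: !Ref ENV           # resolves to dev or prod per stack

One task definition resource, one template; the only thing that changes between environments is the ENV parameter you pass in.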

As a general note, I would stay away from mixing IaC and the CLI. If you're going to use CloudFormation, use it. If there is something it can't handle, then turn to the CLI/custom scripts.
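
For completeness, your pipeline can then drive everything through CloudFormation, deploying the same template once per environment (stack and file names below are placeholders):

aws cloudformation deploy \
  --template-file template.yml \
  --stack-name my-service-dev \
  --parameter-overrides ENV=dev Prefix=dev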

HTH