
I have created two stacks using CloudFormation. The first stack creates an ECS cluster with its required resources:

  • Load Balancer
  • Autoscaling Groups
  • Target Groups
  • Listeners
  • EC2 Instances
  • Task definitions and services
  • Etc...

The second stack creates the CodePipeline resources that configure a continuous delivery pipeline.

The flow should be as follows:

  • User pushes code to GitHub
  • CodePipeline is triggered, executing the following stages:
    • Source Stage: pulls the code from GitHub
    • Build Stage: builds the image and pushes it to ECR
    • Deploy Stage: triggers a STACK_UPDATE on the first CloudFormation stack described above
  • ECS cluster services are updated through the CloudFormation stack update

Everything runs fine, but I have a problem: after I update the code, the application still serves the old code. The pipeline runs green, but the code is not updated when I access the load balancer URL. I suspect this is because the CloudFormation templates didn't change, even though my code did!

Is there anything I can do to force the CloudFormation stack to update? Or should I deploy to the ECS cluster directly, instead of through CloudFormation, in the deploy stage?


2 Answers


As far as I can tell, you are missing a deploy-application stage: your deploy stage updates the infrastructure only and never deploys the application code.

How I usually set it up:

  • The codepipeline & infrastructure templates sit with the code
  • A code push triggers the codepipeline
  • First stage: check out code from CodeCommit (or GitHub in your case)
  • (optional) codepipeline updates itself to deal with changes to the codepipeline template
  • Build stage: build image and push to ECR
  • Deploy infrastructure stage: triggers a cloudformation stack update for the infrastructure stack
  • Deploy app stage: deploys your application code to the infrastructure (what you seem to be missing)

The deploy-app stage for ECS in your CloudFormation template would look something like this:

...
Stages:
    - Name: deploy-app
      Actions:
        - Name: Deploy
          ActionTypeId:
            Category: Deploy
            Owner: AWS
            Provider: ECS
            Version: 1
          InputArtifacts:
            - Name: build-output-artifact
          Configuration:
            ClusterName: 'my-cluster'
            ServiceName: 'my-service'
            FileName: 'imagedefinitions.json'  # optional

In your build stage, after you have pushed your image to ECR, you write the image's ECR URI into imagedefinitions.json. The imagedefinitions.json file must be an output artifact of the build stage and an input artifact of the deploy-app stage.
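For reference, the relevant part of the buildspec.yml might look something like this; the repository URI, image tag, and container name below are hypothetical placeholders, and the container name must match the container name in your task definition:

```yaml
post_build:
  commands:
    # Push the freshly built image to ECR (placeholder URI/tag)
    - docker push $REPOSITORY_URI:$IMAGE_TAG
    # Write the file the ECS deploy action consumes; "my-container"
    # must match the container name in the task definition
    - printf '[{"name":"my-container","imageUri":"%s:%s"}]' $REPOSITORY_URI $IMAGE_TAG > imagedefinitions.json
artifacts:
  files:
    - imagedefinitions.json
```

The ECS deploy action reads imagedefinitions.json from the input artifact, registers a new task definition revision with that image URI, and updates the service.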



It seems that I have a similar configuration to yours.

The Deploy step in my configuration deploys a CloudFormation nested stack.

Some details about the Deploy step:

- Name: Deploy
  Actions:
    - Name: Deploy
      ActionTypeId:
        Category: Deploy
        Owner: AWS
        Version: 1
        Provider: CloudFormation
      Configuration:
        ChangeSetName: Deploy
        ActionMode: CREATE_UPDATE
        StackName: !Sub ${AWS::StackName}-nested
        Capabilities: CAPABILITY_NAMED_IAM
        TemplatePath: Architecture-Template::service-ec2.yaml
        RoleArn: !GetAtt CloudFormationExecutionRole.Arn
        ParameterOverrides: !Sub |
          {
            "ImageURI" : { "Fn::GetParam" : [ "BuildOutput", "imageDetail.json", "ImageURI" ] },
            "ApplicationRepoName": "${ApplicationRepoName}",
            "VpcId": "${VpcId}",
            "Cluster": "${Cluster}",
            "ListenerArn": "${ListenerArn}",
            "ServiceAssignPublicIP": "${ServiceAssignPublicIP}",
            "ServiceDesiredCount": "${ServiceDesiredCount}",
            "ServiceLoadBalancerPath": "${ServiceLoadBalancerPath}",
            "ServiceSecurityGroups": "${ServiceSecurityGroups}",
            "ServiceSubnets": "${ServiceSubnets}",
            "TaskHostPort": "${TaskHostPort}",
            "TaskContainerPort": "${TaskContainerPort}",
            "TaskCpu": "${TaskCpu}",
            "TaskMemory": "${TaskMemory}",
            "TaskExecutionRoleArn": "${TaskExecutionRoleArn}",
            "LoadBalancerPriority": "${LoadBalancerPriority}",
            "TargetGroupHealthCheckPath": "${TargetGroupHealthCheckPath}",
            "TargetGroupPort": "${TargetGroupPort}",
            "TargetGroupHealthCheckPort": "${TargetGroupHealthCheckPort}",
            "TagMaintainer": "${TagMaintainer}",
            "TagEnvironment": "${TagEnvironment}",
            "TagApi": "${TagApi}"
          }
      InputArtifacts:
        - Name: Architecture-Template
        - Name: BuildOutput
      RunOrder: 1

As you can see, I pass a set of parameters to the nested stack, including the ImageURI value from the imageDetail.json file, whose contents are updated during the build step according to the instructions in the buildspec.yml configuration file. Inside that file I define the image tag as:

  • COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
  • echo $COMMIT_HASH
  • IMAGE_TAG=${COMMIT_HASH:=latest}

Then I use this tag to create the ImageURI and update the imageDetail.json configuration file:

  • printf '{"ImageURI":"%s:%s"}' $REPOSITORY_URI $IMAGE_TAG > imageDetail.json
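Put together, the tagging logic above can be sketched as a small runnable snippet; the commit hash and repository URI below are hypothetical stand-ins for the values CodeBuild provides at build time:

```shell
# Hypothetical stand-ins for values CodeBuild sets during a real build:
CODEBUILD_RESOLVED_SOURCE_VERSION="3f2a91c8d4e5b6a7f8091a2b3c4d5e6f70819203"
REPOSITORY_URI="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app"

# Derive a short, unique tag from the commit hash; falls back to "latest"
# if the variable is unset (e.g. a build not triggered by a commit)
COMMIT_HASH=$(echo "$CODEBUILD_RESOLVED_SOURCE_VERSION" | cut -c 1-7)
IMAGE_TAG=${COMMIT_HASH:=latest}

# Write the full image URI for the deploy stage to consume
printf '{"ImageURI":"%s:%s"}' "$REPOSITORY_URI" "$IMAGE_TAG" > imageDetail.json
cat imageDetail.json
```

Because the tag is derived from the commit hash, every new commit produces a different ImageURI, which is what forces the stack update.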

This means each new build changes this value, which in turn is passed to the nested stack, which is then updated because the image definition has changed.

In my case the problem was that I hard-coded latest instead of the actual tag, as below (incorrect; replace latest with the actual tag):

  • printf '{"ImageURI":"%s:%s"}' $REPOSITORY_URI latest > imageDetail.json

As soon as I fixed this line in my buildspec.yml, the service was updated each time a new commit was picked up by CodePipeline.

In short:

To update a nested stack you must introduce some change to its configuration; in my case it was the image tag in Elastic Container Registry.