3 votes

I'm running into problems with the Serverless Framework, because I accidentally used the same service name in two different services.

An error occurred: tableX - TableX already exists.

Let's suppose I have two "serverless.yml" files, both with the same service name. One of them (call it "test1") has resources (DynamoDB tables); the other ("test2") doesn't. Like the following snippets:

Test1

service: sandbox-core
provider:
  name: aws
  stage: core
  runtime: nodejs6.10
  region: sa-east-1
  memorySize: 128
  timeout: 300

resources:
  Resources:

    table3:
      Type: 'AWS::DynamoDB::Table'
      DeletionPolicy: Retain
      Properties:
        TableName: SandboxTable3
        AttributeDefinitions:
          -
            AttributeName: provider
            AttributeType: S
          -
            AttributeName: appId
            AttributeType: S
        KeySchema:
          -
            AttributeName: provider
            KeyType: HASH
          -
            AttributeName: appId
            KeyType: RANGE

        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1

    table4:
      Type: 'AWS::DynamoDB::Table'
      DeletionPolicy: Retain
      Properties:
        TableName: SandboxTable4
        AttributeDefinitions:
          -
            AttributeName: session
            AttributeType: S
        KeySchema:
          -
            AttributeName: session
            KeyType: HASH

        ProvisionedThroughput:
          ReadCapacityUnits: 5
          WriteCapacityUnits: 1

functions:
  auth:
    handler: handler.auth
    events:
      - http:
          path: auth/{session}/{provider}/{appId}
          method: get
          cors: true

Test2

service: sandbox-core

provider:
  name: aws
  stage: core
  runtime: nodejs6.10
  region: sa-east-1
  memorySize: 128
  timeout: 300

functions:
  createCustomData:
    handler: handler.createCustomData
    events:
      - http:
          path: teste2
          method: post
          cors: true

When I sls deploy "test1", it creates the tables as I wanted, with DeletionPolicy: Retain for the tables holding very sensitive data. Then I sls deploy "test2", which has other functions but no resources (DynamoDB tables); it does what is expected: it skips deleting the tables.

But when I sls deploy "test1" again, it doesn't recognize the tables anymore: it starts to "create" the existing tables rather than update them, and the deploy fails.

I need those tables to stay intact, and I need the functions in the service. It looks like CloudFormation lost track of the tables created in the first deploy.

I don't want to split the services (one only for the resources), as was suggested in this GitHub thread. I need the tables that are already running: they hold a lot of data, it would be far too expensive to back it all up and restore it into new tables, and a lot of users could be affected.

So, how do I tell the CloudFormation stack that I'm updating that table, not trying to create it? How do I keep a service's resources tracked in the CloudFormation stack? And how do I prevent deploying a service without the resources it is supposed to own?

What's the best solution for this case? I hope my questions are clear enough.


3 Answers

7 votes

There is no problem related to test2.

For test1 on its own, you are fine to sls deploy as many times as you like.

But if you run sls remove while the DynamoDB table is set to Retain in serverless.yml, the table itself isn't deleted. You then can't create it again with sls deploy, because a resource with the same name already exists. This is by design in AWS CloudFormation.
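
For illustration, a minimal sequence that reproduces the failure (table names taken from the question):

sls deploy   # the stack creates SandboxTable3 and SandboxTable4
sls remove   # the stack is deleted, but both tables are retained
sls deploy   # fails: SandboxTable3 already exists, outside the new stack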

You already found the open ticket for a new feature to skip existing resources. We have to wait for it to be developed and merged; I am waiting for the same solution as well. Go there and vote it up!

In the current situation, you have to back up the DynamoDB table, destroy it, run sls deploy, and restore the data, if it really matters.

I normally manage this with a variable, such as:

DeletionPolicy: ${self:custom.${self:custom.stage}.deletion_policy}

with the per-environment policy defined in custom:

custom:
  stage: ${opt:stage, self:provider.stage}   # so ${self:custom.stage} resolves
  dev:
    deletion_policy: Delete
  prod:
    deletion_policy: Retain
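
Applied to one of the tables from the question, a minimal sketch (assuming the custom.stage line above, so the lookup resolves):

resources:
  Resources:
    table4:
      Type: 'AWS::DynamoDB::Table'
      DeletionPolicy: ${self:custom.${self:custom.stage}.deletion_policy}
      Properties:
        TableName: SandboxTable4
        AttributeDefinitions:
          - AttributeName: session
            AttributeType: S
        KeySchema:
          - AttributeName: session
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 5
          WriteCapacityUnits: 1

Deploying with sls deploy --stage prod resolves the policy to Retain, while --stage dev resolves it to Delete, so throwaway environments can be removed freely.
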
1 vote

Just to clarify the point: even though you have two serverless.yml files, since the service name is the same for both (sandbox-core), deploying either test1 or test2 updates the same CloudFormation stack and template.

This means that when you deploy test2 you are effectively removing the DynamoDB tables from the template, and in a subsequent deploy of test1 CloudFormation is unable to create resources with those names, because the physical tables still exist (they were retained, not deleted).

If you want to avoid data loss, setting the policy to Retain does the trick, but you need to merge both serverless.yml files into one; see the sketch below. Then the DynamoDB tables will never be removed from the template.
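
A minimal sketch of the merged file, assuming both handlers end up in the same code package (table3 would be declared next to table4 exactly as in test1; it is omitted here only to keep the sketch short):

service: sandbox-core

provider:
  name: aws
  stage: core
  runtime: nodejs6.10
  region: sa-east-1
  memorySize: 128
  timeout: 300

functions:
  auth:                      # from test1
    handler: handler.auth
    events:
      - http:
          path: auth/{session}/{provider}/{appId}
          method: get
          cors: true
  createCustomData:          # from test2
    handler: handler.createCustomData
    events:
      - http:
          path: teste2
          method: post
          cors: true

resources:
  Resources:
    table4:
      Type: 'AWS::DynamoDB::Table'
      DeletionPolicy: Retain
      Properties:
        TableName: SandboxTable4
        AttributeDefinitions:
          - AttributeName: session
            AttributeType: S
        KeySchema:
          - AttributeName: session
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 5
          WriteCapacityUnits: 1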

What can help you solve the issue (since the tables already exist with data) is to create a backup of your tables, deploy the merged serverless.yml as one single service with the tables included, manually remove the (newly created, empty) tables from the console, and restore the backups with exactly the same names as the ones created by CloudFormation. This ensures that your template still references the tables it expects.
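
One plausible ordering of that sequence with the AWS CLI (these are real DynamoDB commands; the backup ARN is whatever create-backup returns, and each step repeats per table). Note the deploy only succeeds once the orphaned table is out of the way, so it must be deleted first:

# 1. back up the existing, orphaned table
aws dynamodb create-backup \
    --table-name SandboxTable3 \
    --backup-name SandboxTable3-before-merge

# 2. delete the orphaned table so CloudFormation can create it again
aws dynamodb delete-table --table-name SandboxTable3
aws dynamodb wait table-not-exists --table-name SandboxTable3

# 3. deploy the merged service; the stack now owns a fresh, empty table
sls deploy

# 4. delete the empty table out-of-band and restore the backup under
#    exactly the same name, so the stack's references stay valid
aws dynamodb delete-table --table-name SandboxTable3
aws dynamodb wait table-not-exists --table-name SandboxTable3
aws dynamodb restore-table-from-backup \
    --target-table-name SandboxTable3 \
    --backup-arn <arn-from-step-1>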

0 votes

The proper way to fix CloudFormation + DynamoDB retained tables is to import the already-existing resources into the stack (AWS console > CloudFormation > Stacks > my-stack):

  1. Copy the template that you have deployed in the stack (Template tab).

  2. Run sls package with the same args (--stage $stage, etc.) so you get the generated .serverless/cloudformation_template_update_stack.json for your master version of the project (see the sketch after this list).

  3. Find the "already existing" resources that are missing from the stack (easy way: filter the stack events for DELETE_SKIPPED — those are the retained database resources).

  4. Copy those resources from .serverless/cloudformation_template_update_stack.json into the template from step 1.

  5. Stack Actions >> Import resources into stack.

  6. Upload the template file and fill in the table names where the console asks for them (they are the TableName values in the resources themselves).

  7. Validate that the only proposed actions are imports of the missing tables into the stack, and confirm.

  8. Check the stack events to verify the imports completed (IMPORT_COMPLETE).
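
For step 2, the package command would mirror the question's provider settings, and for step 4 the block to copy looks like this sketch of what the generated JSON contains for table4 (the real generated file may differ in details):

sls package --stage core --region sa-east-1

"table4": {
  "Type": "AWS::DynamoDB::Table",
  "DeletionPolicy": "Retain",
  "Properties": {
    "TableName": "SandboxTable4",
    "AttributeDefinitions": [
      { "AttributeName": "session", "AttributeType": "S" }
    ],
    "KeySchema": [
      { "AttributeName": "session", "KeyType": "HASH" }
    ],
    "ProvisionedThroughput": {
      "ReadCapacityUnits": 5,
      "WriteCapacityUnits": 1
    }
  }
}

Note that CloudFormation requires every imported resource to carry a DeletionPolicy attribute, which the Retain setting here already satisfies.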