
Introduction

We are building an application to process a monthly file, and several AWS components are involved in this project:

  1. A Lambda reads the file from S3, parses it, and pushes each record to DynamoDB with a PENDING flag.
  2. A second Lambda processes these records after the first Lambda is done, flagging each record as PROCESSED once it is finished with it.
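The first step above can be sketched as follows. This is a minimal illustration: the one-record-per-line file layout and the item shape are assumptions, and in a real Lambda the resulting items would be written to DynamoDB with boto3 (e.g. `batch_writer`) rather than returned.

```python
def parse_monthly_file(raw_text):
    """Turn each non-empty line of the monthly file into a
    DynamoDB-style item flagged PENDING.

    The line format (one record id per line) is an assumption;
    adapt the parsing to the real file layout.
    """
    items = []
    for line in raw_text.splitlines():
        if not line.strip():
            continue  # skip blank lines
        items.append({"record_id": line.strip(), "status": "PENDING"})
    return items
```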

Problem: we want to send a result to SQS once all records have been processed.

Our approach is to use DynamoDB Streams to trigger a Lambda each time a record is updated; that Lambda queries DynamoDB to check if all records are processed and sends the notification when that is true.
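The stream-triggered check can be sketched like this. The in-memory `TABLE` list stands in for DynamoDB, and the returned dict stands in for the SQS send; in a real handler you would query the table (e.g. via a status index) and call `sqs.send_message` with boto3. All names here are illustrative.

```python
# Stand-in for the DynamoDB table; a real handler would query it instead.
TABLE = [
    {"record_id": "r1", "status": "PROCESSED"},
    {"record_id": "r2", "status": "PENDING"},
]

def all_processed(table):
    """Return True when no record is still PENDING."""
    return all(item["status"] == "PROCESSED" for item in table)

def handle_stream_event(event, table):
    """Invoked for each batch of DynamoDB stream records.

    When every record is PROCESSED, emit the completion message
    (here just returned; a real handler would send it to SQS).
    """
    if all_processed(table):
        return {"action": "send_sqs", "body": "all records processed"}
    return {"action": "none"}
```

Note that with stream batching and concurrent updates, more than one invocation may observe the "all processed" state, so the SQS send should be made idempotent in practice.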

Questions

  1. Is there another approach that achieves this goal without triggering a Lambda each time a record is updated?
  2. Is there a better approach that does not involve DynamoDB Streams?

1 Answer

  1. I would recommend DynamoDB Streams, as they are reliable enough, and triggering a Lambda per update is cheap: an execution will usually take 1-100 ms. Even with millions of executions it is a robust solution. One way is to keep a shared counter of remaining records in ElastiCache: decrement it on each processed update, and once the counter reaches 0 you are complete.
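The shared-counter idea can be sketched as below. `SharedCounter` is a stand-in for an atomic counter in ElastiCache (e.g. Redis `DECR`, which atomically decrements and returns the new value); the class and function names are illustrative, not a real cache client API.

```python
class SharedCounter:
    """Stand-in for an atomic cache counter (e.g. Redis DECR).

    Initialized to the total number of records once the file is parsed.
    """
    def __init__(self, total):
        self.value = total

    def decrement(self):
        """Atomically decrement and return the new value (DECR semantics)."""
        self.value -= 1
        return self.value

def on_record_processed(counter):
    """Call once per record flagged PROCESSED.

    Returns True when the last record has been processed,
    i.e. when the completion message should be sent to SQS.
    """
    return counter.decrement() == 0
```

Because the cache decrement is atomic, exactly one invocation observes the counter hitting zero, so the completion notification is sent only once.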

> Are there any other approach that can achieve this goal without triggering Lambda each time a record gets updated?

  1. Another option is a scheduled Lambda that checks the status of all records in the DB (query for PROCESSED) and sends the result to SQS when everything is done. Depending on the load, you can decide how often it should run. (Trigger it with a CloudWatch scheduled event.)

  2. What about a table monthly_file_process with a row for every month holding an extra counter? Once the S3 file is read, count the records and persist the total as the counter. For every record flagged PROCESSED, decrement the counter; if the counter is 0 after the update, send the SQS notification. The whole thing, including sending to SQS, can be done from the second Lambda that processes the records, with just the extra step of checking the counter.
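The counter-table option can be sketched as follows. The dict `table` stands in for the monthly_file_process DynamoDB table, and the decrement mimics an atomic `UpdateItem` with `UpdateExpression="ADD remaining :neg_one"` and `ReturnValues="UPDATED_NEW"`; the table and attribute names are assumptions for illustration.

```python
def decrement_counter(table, month_key):
    """Simulates an atomic DynamoDB counter decrement that
    returns the updated value (UpdateItem with ADD and
    ReturnValues="UPDATED_NEW")."""
    row = table[month_key]
    row["remaining"] -= 1
    return row["remaining"]

def after_processing_record(table, month_key):
    """Extra step in the processing Lambda: decrement the month's
    counter and, when it hits zero, signal the SQS send."""
    if decrement_counter(table, month_key) == 0:
        return "send_sqs"  # last record done: notify via SQS
    return "continue"
```

As with the cache counter, the atomic decrement guarantees only one invocation sees the value reach zero, which keeps the notification single-shot without any extra coordination.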