2 votes

We are using Amazon SQS FIFO queues to handle the appointment booking service for our app. Once a message gets into the queue, it triggers an AWS Lambda function to manage the booking process. Since it's a FIFO queue, we ensure that if two people request the same slot, the slot is given to the first requester. My question is: is there a way (a setting on SQS FIFO queues, maybe?) to ensure that one message doesn't trigger the Lambda function until the previous message has COMPLETED execution? I'm just trying to avoid writing additional logic (some kind of slot-locking system) to ensure that the same slot is not targeted by two back-to-back messages before the first completes the "booking" process. Thanks.

But FIFO queues don't support Lambda function triggers. – jarmod
As jarmod says, FIFO queues don't support Lambda triggers, so I'll assume you have a polling Lambda function? And you mention "queues", not "queue" – how many queues do you have? Do they all have a specific purpose, or can any queue carry any message? IOW, could the two clashing messages be on different queues, or will they always be on the same queue? – Adam Benson
I think the solution to this particular problem isn't locking the SQS message or equivalent, but using atomic locking or transactions in your db/business logic. – Charlie Schliesser
November 2019: AWS Lambda now supports SQS FIFO triggers. See: AWS Lambda Supports Amazon SQS FIFO (First-In-First-Out) as an Event Source – John Rotenstein

1 Answer

2 votes

Here is how I solved the problem.

I would not recommend using SQS at all; you don't need any of the functionality SQS offers for your use case.

I moved from SQS to Kinesis Data Streams and set the Lambda trigger's batch size to 1. That takes care of it: a Kinesis stream is FIFO within each shard, so records with the same partition key are delivered in order, one at a time. Kinesis also scales very well compared to transactional FIFO SQS queues.

Producer --> Kinesis Data Streams --> (Lambda trigger, batch size = 1) --> Lambda
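
A minimal sketch of that pipeline using boto3. The stream name appointment-bookings, the function name bookingHandler, and the account/region in the ARN are all hypothetical; using the slot ID as the partition key keeps all requests for the same slot on the same shard, so they are processed in order:

```python
import json
import boto3

kinesis = boto3.client("kinesis")
lambda_client = boto3.client("lambda")

# Producer side: put each booking request on the stream.
# The slot ID as partition key routes all requests for the same
# slot to the same shard, preserving their order.
kinesis.put_record(
    StreamName="appointment-bookings",  # hypothetical stream name
    Data=json.dumps({"slotId": "slot-42", "userId": "user-1"}).encode(),
    PartitionKey="slot-42",
)

# Trigger side: map the stream to the Lambda function with BatchSize=1,
# so each invocation handles exactly one record and the next record on
# that shard is not picked up until the invocation completes.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:kinesis:us-east-1:123456789012:stream/appointment-bookings",
    FunctionName="bookingHandler",  # hypothetical function name
    BatchSize=1,
    StartingPosition="LATEST",
)
```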

Considering the error cases:

Processing Error:

If your Lambda fails while processing a record, the shard checkpoint does not advance and the same record is retried indefinitely. You need to fix your Lambda to get it moving forward. The total time it will keep retrying is bounded by the stream's configured data retention period.
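
For illustration, a minimal handler sketch (assuming the record format from the producer sketch above, and a hypothetical book_slot helper). Letting any exception propagate is what blocks the checkpoint and drives the retries:

```python
import base64
import json

def handler(event, context):
    # With batch size 1 there is exactly one record per invocation.
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        book_slot(payload["slotId"], payload["userId"])

def book_slot(slot_id, user_id):
    """Hypothetical booking logic; raise on failure so the record is retried."""
    ...
```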

Bug in Lambda:

If you have a bug in your Lambda and you want to reprocess from the beginning of the stream, you can do so; see the sketch below.
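
One way to do the replay, sketched with boto3 under the same hypothetical names. A mapping's starting position can't be changed in place, so you delete the mapping and recreate it from TRIM_HORIZON, the oldest record still retained on the stream (deletion is asynchronous, so the recreate may need a short wait):

```python
import boto3

lambda_client = boto3.client("lambda")

# Remove the existing mapping(s) for the function.
mappings = lambda_client.list_event_source_mappings(FunctionName="bookingHandler")
for m in mappings["EventSourceMappings"]:
    lambda_client.delete_event_source_mapping(UUID=m["UUID"])

# Recreate the mapping from the oldest retained record to replay the stream.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:kinesis:us-east-1:123456789012:stream/appointment-bookings",
    FunctionName="bookingHandler",  # hypothetical, as above
    BatchSize=1,
    StartingPosition="TRIM_HORIZON",
)
```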

Hope it helps.