I'm attempting to set up a practical Microservice demo for a few simple internal systems that deal with order management at my company, but I am struggling to understand data consistency between Microservices at scale.

I've identified a simple scenario for Microservices - a current application we have in play takes orders as they are processed on our website and updates a customer's "Account Credit" - basically the outstanding money they can spend with us before their account needs to be reviewed.

I've attempted to break this VERY simple requirement down into a few Microservices. These are as defined below:

Simple Microservice Structure, showing Customer Microservice and Order Microservice

The API provides various levels of functionality - it allows us to create a new Customer, which triggers the below:

New Customer service process

With SQL, we can use relative (optimistic) updates within the database to ensure consistency when two orders are processed at the same time by scaled Microservices (EG: two instances of the Order Microservice, where each Microservice, but not each instance of a Microservice, has its own database).

For example, we can do the following and rely on SQL to manage locking, so that the credit ends up at the right value when two orders are processed at the same time:

UPDATE [orderms].[customers] SET CreditLimit = CreditLimit - 100, NoOfOrders = NoOfOrders + 1 WHERE CustomerId = 1

With the above, if the credit was 1000 and 2 orders of 100 are processed, and each order is distributed to a different instance of the "Order" Microservice, we should be able to assume that the correct figures will be present in the customers table within the Order Microservice (MSSQL's query-based locking should take care of this automatically).
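To see why the relative form is safe, here is a minimal sketch of that same UPDATE run against an in-memory SQLite database (SQLite stands in for MSSQL here purely for illustration; the table and column names are taken from the query above):

```python
import sqlite3

# In-memory stand-in for the Order Microservice's database.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE customers (CustomerId INTEGER PRIMARY KEY, "
    "CreditLimit REAL, NoOfOrders INTEGER)"
)
conn.execute("INSERT INTO customers VALUES (1, 1000.0, 0)")

# Two orders of 100, each applied as a *relative* update. The database
# serializes the writes, so the result is correct regardless of which
# instance's statement runs first.
for _ in range(2):
    conn.execute(
        "UPDATE customers SET CreditLimit = CreditLimit - 100, "
        "NoOfOrders = NoOfOrders + 1 WHERE CustomerId = 1"
    )

print(conn.execute(
    "SELECT CreditLimit, NoOfOrders FROM customers WHERE CustomerId = 1"
).fetchone())  # (800.0, 2)
```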

The problem then comes when we attempt to integrate these back to the Customer Microservice. We will have two messages, one from each instance of the Order Microservice, each passed as an event, for example:

Process of a new order through the Microservices

Given the above - it's likely we would follow the pattern of updating the "Customer" SQL table with the following two queries:

UPDATE [customerms].[customers] SET CreditLimit = 900.00 WHERE CustomerId = 1
UPDATE [customerms].[customers] SET CreditLimit = 800.00 WHERE CustomerId = 1

However - depending on the load each "Customer" Microservice instance is under, Instance #1 might be busy creating several new Customers at that moment, and therefore process its request more slowly than Instance #2. The SQL queries would then be executed out of order, leaving the "Order" database with a CreditLimit of 800 (correct) and the Customer Microservice with a CreditLimit of 900 (incorrect).
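The race is easy to reproduce in a few lines. This sketch (names are illustrative, not from any real framework) models each event as carrying the *absolute* credit limit, the way the two UPDATE statements above do, and then delivers the events in the wrong order:

```python
# Each Order instance sends the absolute credit limit it computed;
# the Customer side blindly applies whichever message arrives last.

def apply_absolute(customer_row, event):
    # Equivalent to: UPDATE customers SET CreditLimit = <value>
    customer_row["CreditLimit"] = event["credit_limit"]

customer_row = {"CustomerId": 1, "CreditLimit": 1000}

# Events as emitted (900 first, then 800)...
events = [{"credit_limit": 900}, {"credit_limit": 800}]

# ...but delivered out of order because Instance #1 was busy:
for event in reversed(events):
    apply_absolute(customer_row, event)

print(customer_row["CreditLimit"])  # 900 - not the correct 800
```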

In a monolithic application, we would normally add an element of locking (or potentially a Mutex) if this was really required, or otherwise rely on SQL locking as we do within the Order Microservice; however, as this is a distributed process, none of these older methods apply.

Any advice? I can't seem to see past this.


1 Answer


Solution in my opinion

Maintain the initial credit limit in both Microservices. Say the initial credit limit is 1000; set it in both Microservices' databases. When an order is processed, first reduce it at the Order Microservice, and instead of sending the resulting credit limit (800, 900, or any amount), send the amount that has to be deducted from the Customer Microservice's credit limit.

Say you have processed two orders worth 100 each. First reduce the credit limit at the Order Microservice, then generate two events of 100 dollars each to be consumed by the Customer Microservice, which reduces its own credit limit by that amount. This way, regardless of the order in which the events arrive, you only ever deduct an amount.

Event Sourcing (Better approach)

Event sourcing with optimistic locking is an approach you can take. It's a pattern where, instead of saving a particular state of an entity, you save all of its events in the database. It's an append-only store, and events are stored in the order they arrive. You replay the events to arrive at a particular state, in this case the credit limit, and you get the full history as well. Remember, in case of any inconsistency you normally have no log to fall back on, but with event sourcing you do.
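A minimal event-sourcing sketch of the credit-limit case (class and event names here are illustrative, not from any particular event-sourcing library):

```python
class CustomerEventStore:
    """Append-only store; current state is derived by replaying events."""

    def __init__(self):
        self.events = []  # append-only, kept in arrival order

    def append(self, event):
        self.events.append(event)

    def credit_limit(self):
        # Replay the full history to derive the current credit limit.
        credit = 0
        for event in self.events:
            if event["type"] == "CustomerCreated":
                credit = event["initial_credit"]
            elif event["type"] == "OrderPlaced":
                credit -= event["amount"]
        return credit

store = CustomerEventStore()
store.append({"type": "CustomerCreated", "initial_credit": 1000})
store.append({"type": "OrderPlaced", "amount": 100})
store.append({"type": "OrderPlaced", "amount": 100})

print(store.credit_limit())  # 800, and store.events holds the full history
```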