0
votes

We are using microservices, CQRS, and an event store built on the Node.js cqrs-domain package. Everything works like a charm, and the typical flow goes like this:

  1. REST
  2. Service
  3. Command validation
  4. Command
  5. Aggregate
  6. Event
  7. Event store (transactional data)
  8. Return aggregate with aggregate ID
  9. Store in microservice-local DB (essentially the read DB)
  10. Publish event to the queue
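The steps above can be sketched roughly as follows; `eventStore`, `readDb`, and `queue` are hypothetical in-memory stand-ins, not cqrs-domain APIs:

```javascript
// Sketch of steps 7-10: note that the event-store append and the
// read-DB update happen in two separate transaction contexts.
const eventStore = [];    // step 7: book of record
const readDb = new Map(); // step 9: microservice's read DB
const queue = [];         // step 10: published events

function handleCommand(command) {
  // steps 4-6: command -> aggregate -> domain event
  const event = {
    aggregateId: command.aggregateId,
    type: 'ItemCreated',
    payload: command.payload,
  };
  eventStore.push(event);                       // step 7: transactional save
  // step 9 runs in a DIFFERENT transaction context: if it throws here,
  // the event is already durable in the event store.
  readDb.set(event.aggregateId, event.payload); // step 9: read DB write
  queue.push(event);                            // step 10: publish
  return event;                                 // step 8: aggregate ID to caller
}
```

This makes the failure mode in the question concrete: a throw between step 7 and step 10 leaves the book of record ahead of the read DB.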

The problem with the flow above is that the transactional save (persistence to the event store) and the write to the microservice's read DB happen in different transaction contexts. If there is a failure at step 9, how should I handle the event that has already been persisted to the event store and the aggregate that has already been updated?

Any suggestions would be highly appreciated.

What exactly is "9. store in microservice local DB (essentially the read DB)"? - Constantin Galbenu
It is the read DB that the microservice's reads (GET, GET ALL, etc.) are served from. - vaibhav
It looks like you are saving your aggregate state in the DB? What is step 8? - Roman Eremin

3 Answers

3
votes

The problem with the flow above is that the transactional save (persistence to the event store) and the write to the microservice's read DB happen in different transaction contexts. If there is a failure at step 9, how should I handle the event that has already been persisted to the event store and the aggregate that has already been updated?

You retry it later.

The "book of record" is the event store. The downstream views (the "published events", the read models) are derived from the book of record. They are typically behind the book of record in time (eventual consistency) and are not typically synchronized with each other.

So you might have, at some point in time, 105 events written to the book of record, but only 100 published to the queue, and a representation in your service database constructed from only 98.

Updating a view is typically done in one of two ways. You can, of course, start with a brand new representation and replay all of the events into it as part of each update. Alternatively, you track in the metadata of the view how far along in the event history you have already gotten, and use that information to determine where the next read of the event history begins.
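The second approach can be sketched as follows (the store and view shapes are assumptions for illustration):

```javascript
// Hypothetical checkpoint-based view update: the view's metadata records
// the position of the last event already applied, and each update resumes
// from that position instead of replaying the whole history.
const events = []; // stand-in for the event history in the book of record

const view = {
  checkpoint: 0, // how far along the event history this view has gotten
  total: 0,      // the derived read-model state (a running sum here)
  apply(event) { this.total += event.amount; },
};

function updateView(view) {
  const fresh = events.slice(view.checkpoint); // only events not yet seen
  for (const event of fresh) view.apply(event);
  view.checkpoint += fresh.length;             // remember the new position
}
```

If an update fails partway, the checkpoint still points at the last fully applied event, so the next run simply re-reads from there.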

0
votes

Inside your event store, you could track whether read-side replication was successful. As soon as step 9 succeeds, you can flag the event as 'replicated'.

That way, you could introduce a component watching for unreplicated events and trigger step 9. You could also track whether the replication failed multiple times.

Updating the read side (step 9) and flagging the event as replicated should happen consistently. You could use a saga pattern here.
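A rough sketch of such a watcher (the flag names, event shape, and `replicate` callback are assumptions, not cqrs-domain APIs):

```javascript
// Hypothetical watcher component: it scans for events not yet flagged as
// replicated, retries the read-side update (step 9), and tracks how many
// times replication has failed for each event.
function replicateUnreplicated(events, replicate) {
  for (const event of events) {
    if (event.replicated) continue;               // already on the read side
    try {
      replicate(event);                           // step 9: update the read DB
      event.replicated = true;                    // in a real store, flag this
                                                  // consistently with the update
    } catch (err) {
      event.attempts = (event.attempts || 0) + 1; // track repeated failures
    }
  }
}
```

Events whose `attempts` count keeps growing can then be escalated (dead-lettered, alerted on) instead of retried forever.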

0
votes

I think I have now understood it to a better extent. The aggregate would still be created; all validations for any type of consistency should happen before the aggregate is constructed. What needs handling is the failure, beyond the purview of that code, that occurs while updating the microservice's read-side DB. So in the ideal case the aggregate is created, but the associated event remains marked as undispatched until all the read-side dependencies are updated; if they are not, it stays undispatched and can be handled separately. The event store still holds all the events, and eventual consistency is maintained this way.