I am pretty new to event sourcing, and we have a domain that we are considering applying Event Sourcing to.
We have an app that stores domain events in an Oracle DB, and consumers that use those events to generate read models (all read models are built in memory). The consumers mostly use a poll model to fetch the events: they receive a request, consume the relevant stream of events, build their read model, and return it to the caller.
So, for example:
Event Generation API --> Generates events for aggregates of type A and stores them in an Oracle DB.
Consumer 1 --> gets a request for a certain type A aggregate, then fetches the events and replays them to prepare its read model.
Consumer 2 --> does exactly the same thing but presents a different read model.
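To make that concrete, here is a rough sketch of what one consumer does. All the names (`Event`, `EventStore`, `SummaryProjection`) are hypothetical placeholders for illustration, not our real code:

```java
import java.util.List;

// A stored domain event (simplified; real events would carry a typed payload).
record Event(String aggregateId, long version, String type, String data) {}

// The event store, backed by the Oracle DB in our case; events ordered by version.
interface EventStore {
    List<Event> fetchEvents(String aggregateId);
}

// Consumer 1's read model: folds events into a summary view.
class SummaryProjection {
    private String name;

    void apply(Event e) {
        // Each event type mutates the in-memory state.
        if ("NameChanged".equals(e.type())) {
            name = e.data();
        }
    }

    String view() {
        return "Summary{name=" + name + "}";
    }
}

// Poll model: on each request, fetch the stream and replay it from scratch.
class Consumer1 {
    String handleRequest(String aggregateId, EventStore store) {
        SummaryProjection projection = new SummaryProjection();
        for (Event e : store.fetchEvents(aggregateId)) {
            projection.apply(e);
        }
        return projection.view();
    }
}
```

Consumer 2 would run the same fetch-and-replay loop but apply the events to a different projection class.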
Why are we using ES?
- We need to provide a historical representation of the data at each change, i.e. the state of the aggregate at that change.
- We need to be able to get a snapshot of an aggregate at any point in time, on a per-event basis. For example, when a name changes, we need the state of the aggregate as of that name-changed event.
- We need to represent the diff of the aggregate's state between two points in time.
All of those requirements are served in a poll manner: the consumers request the view at a certain point in time (which could be the latest or a previous one). A sketch of what we mean is below.
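The way we picture serving those requirements (again with the hypothetical names from the sketch above): a point-in-time snapshot is just a replay truncated at the requested event, and a diff is two such replays compared:

```java
// Reuses Event, EventStore, and SummaryProjection from the sketch above.
class PointInTimeQueries {

    // State "as of" an event: replay only the events up to that version.
    static SummaryProjection stateAt(String aggregateId, long version, EventStore store) {
        SummaryProjection projection = new SummaryProjection();
        for (Event e : store.fetchEvents(aggregateId)) {
            if (e.version() > version) break; // stop at the requested point in time
            projection.apply(e);
        }
        return projection;
    }

    // Diff between two points in time: replay twice and compare the states.
    static String diff(String aggregateId, long fromVersion, long toVersion, EventStore store) {
        SummaryProjection before = stateAt(aggregateId, fromVersion, store);
        SummaryProjection after = stateAt(aggregateId, toVersion, store);
        return before.view() + " -> " + after.view();
    }
}
```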
Question 1
Since both consumer 1 and consumer 2 execute basically the same logic to replay the events, where should the replay code live? Should we implement a common library? Does that mean we will have duplicate replay code across consumers?
I am worried that when we update an event schema we will need to update multiple consumers.
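What we are considering is extracting the generic replay loop into a shared library and keeping only the projection logic per consumer, roughly like this (hypothetical sketch, building on the types above):

```java
import java.util.List;

// Shared library code: the generic fold over an event stream.
interface Projection<V> {
    void apply(Event e); // Event as defined in the first sketch
    V view();
}

class Replayer {
    static <V> V replay(List<Event> events, Projection<V> projection) {
        for (Event e : events) {
            projection.apply(e);
        }
        return projection.view();
    }
}
```

With this split, each consumer implements only its own Projection, but a schema change to an event still touches every Projection that handles that event type, which is exactly the part that worries me.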
Question 2
Is this a good case of event sourcing?