2
votes

I am trying to implement the CQRS pattern in part of my application; the rest is handled in CRUD style. On the write side, when I post a command, the command handler loads the aggregate, generates the necessary events, stores them in the EventStore, and publishes them to create/update the read models. My questions are:

  1. In a command handler, can I load a CRUD-handled entity/model while handling a command, in order to generate events or validate the request?
  2. Here, "EventStore" can mean an in-memory event queue or any sort of database, right?
  3. Can I publish events from related aggregates from within a command handler?
  4. While generating read models, can I regenerate the aggregate from past events along with the current event?
@SirRufo Awesome! That really helps. Do you have the same in good resolution? - Pokuri
No I don't (I am not the creator), but you may find this also helpful to read williamverdolini.github.io/Cqrs-es-todos.html - Sir Rufo

1 Answer

3
votes

Answering the title question in a single word: No.

Your read model should only depend on the events generated by your aggregates. You may have more than one read model: one for the UI, another for reports, one for logging, and yet another for statistics, to name a few.

You don't want any of your read models to break whenever you change the write model. You also don't want your write model to be constrained by any read model.

Having the events as the only dependency makes a clean separation.
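To make that separation concrete, here is a minimal sketch (in Python, with hypothetical event and class names) of a read model that consumes only events and never touches write-side entities:

```python
from dataclasses import dataclass

# Hypothetical event published by the write side; the name and fields
# are illustrative, not from any particular framework.
@dataclass
class TodoCreated:
    todo_id: str
    title: str

class TodoListReadModel:
    """A read model: its only input is the event stream."""
    def __init__(self):
        self.rows = {}  # denormalized view, keyed by aggregate id

    def apply(self, event):
        if isinstance(event, TodoCreated):
            self.rows[event.todo_id] = {"id": event.todo_id,
                                        "title": event.title}

view = TodoListReadModel()
view.apply(TodoCreated("1", "Buy milk"))
```

Because the read model knows nothing about the write-side aggregate class, you can restructure the write model freely as long as the published events keep their shape.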

So, for the other questions:

  • A command handler will load one aggregate from a repository, update it, then save it back to the repository.
  • The command handler does not generate events; the aggregate does.
  • Request validation is usually done before sending the command to the handler, but if the handler is your first step, you must do your validation there.
  • An event store just stores the events so you can retrieve them later, for a given aggregate and in the correct order. How and where you store the events is up to you.
  • Related aggregates are usually handled in sagas / process managers. One reason for this is that updating multiple aggregates at once gets messy pretty fast.
  • Read models are generated after the fact by listening to the event stream. How and when you do the listening is up to you. You can do it in-process by listening to an event dispatcher, or out-of-process by reading all events after a certain checkpoint from a persistent data store.
  • An aggregate is regenerated every time you retrieve it from the repository. The repository's job is to read all events for the aggregate and apply them.
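The load-update-save cycle and the event replay described above can be sketched as follows (Python, with hypothetical names; an in-memory dict stands in for a real event store):

```python
from dataclasses import dataclass

# Illustrative events; real ones would carry whatever data your domain needs.
@dataclass
class TodoCreated:
    todo_id: str
    title: str

@dataclass
class TodoRenamed:
    todo_id: str
    title: str

class Todo:
    """Aggregate: rebuilt by replaying events, emits new events on commands."""
    def __init__(self):
        self.id = None
        self.title = None
        self.pending = []  # new events not yet persisted

    def apply(self, event):
        # State transitions live here; used both for replay and new events.
        if isinstance(event, TodoCreated):
            self.id, self.title = event.todo_id, event.title
        elif isinstance(event, TodoRenamed):
            self.title = event.title

    def rename(self, title):
        # Command method: validate, then record the resulting event.
        if not title:
            raise ValueError("title must not be empty")
        event = TodoRenamed(self.id, title)
        self.apply(event)
        self.pending.append(event)

class Repository:
    def __init__(self):
        self.store = {}  # aggregate id -> ordered list of events

    def load(self, todo_id):
        # Regenerate the aggregate by replaying all its events in order.
        todo = Todo()
        for event in self.store.get(todo_id, []):
            todo.apply(event)
        return todo

    def save(self, todo):
        # Append only the new events; history is never rewritten.
        self.store.setdefault(todo.id, []).extend(todo.pending)
        todo.pending = []

def handle_rename(repo, todo_id, new_title):
    """Command handler: load one aggregate, invoke behavior, save."""
    todo = repo.load(todo_id)
    todo.rename(new_title)
    repo.save(todo)

repo = Repository()
repo.store["1"] = [TodoCreated("1", "Buy milk")]  # prior history
handle_rename(repo, "1", "Buy bread")
```

Note that the handler itself never constructs an event; it only coordinates the load, the aggregate method call, and the save, which is the separation the answer describes.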