Answering the title question in a single word: No.
Your read model should only depend on the events generated by your aggregates. You may have more than one read model: one for the UI, another for reports, one for logging, and yet another for statistics, to name a few.
You don't want any of your read models to break whenever you change the write model. You also don't want your write model to be constrained by any read model.
Having the events as the only dependency makes for a clean separation.
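To make that separation concrete, here is a minimal sketch (all event and class names are hypothetical) of two read models built from the same event stream. Neither one knows anything about the write model's internals; change the aggregate however you like, and as long as the events stay the same, both keep working.

```python
from dataclasses import dataclass

@dataclass
class AccountOpened:
    account_id: str
    owner: str

@dataclass
class MoneyDeposited:
    account_id: str
    amount: int

class BalanceReadModel:
    """Feeds the UI: current balance per account."""
    def __init__(self):
        self.balances = {}
    def apply(self, event):
        if isinstance(event, AccountOpened):
            self.balances[event.account_id] = 0
        elif isinstance(event, MoneyDeposited):
            self.balances[event.account_id] += event.amount

class DepositCountReadModel:
    """Feeds statistics: number of deposits per account."""
    def __init__(self):
        self.counts = {}
    def apply(self, event):
        if isinstance(event, AccountOpened):
            self.counts[event.account_id] = 0
        elif isinstance(event, MoneyDeposited):
            self.counts[event.account_id] += 1

# Both read models consume the same stream, independently.
events = [AccountOpened("a1", "Alice"),
          MoneyDeposited("a1", 100),
          MoneyDeposited("a1", 50)]
ui, stats = BalanceReadModel(), DepositCountReadModel()
for e in events:
    ui.apply(e)
    stats.apply(e)
```

After replaying the three events, `ui.balances["a1"]` is 150 and `stats.counts["a1"]` is 2, each derived from nothing but the events.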
So, for the other questions:
- A command handler loads one aggregate from a repository, updates it, then saves it back to the repository.
- The command handler does not generate events; the aggregate does.
- Request validation is usually done before the command is sent to the handler, but if the handler is your first step, you must do the validation there.
- An event store just stores the events so you can retrieve them later, per aggregate and in the correct order. How and where you store the events is up to you.
- Related aggregates are usually coordinated by sagas / process managers. One reason for this is that updating multiple aggregates at once gets messy pretty fast.
- Read models are generated after the fact by listening to the event stream. How and when you do the listening is up to you. You can do it in-process by listening to an event dispatcher, or out-of-process by reading all events after a certain checkpoint from a persistent data store.
- An aggregate is regenerated every time you retrieve it from the repository. The repository's job is to read all events for the aggregate and apply them.
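The points above can be tied together in one sketch (again, every name here is hypothetical): the aggregate records events, the repository rebuilds it by replaying them, and the command handler just loads, invokes, and saves, generating no events of its own.

```python
from dataclasses import dataclass

@dataclass
class MoneyDeposited:
    account_id: str
    amount: int

class Account:
    def __init__(self, account_id):
        self.id = account_id
        self.balance = 0
        self.uncommitted = []  # new events awaiting persistence

    # Command side: validate, then record an event.
    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        event = MoneyDeposited(self.id, amount)
        self._apply(event)
        self.uncommitted.append(event)

    # Event side: pure state transition, no validation,
    # so the same code works during replay.
    def _apply(self, event):
        if isinstance(event, MoneyDeposited):
            self.balance += event.amount

class EventStore:
    """Appends and reads events per aggregate, in order."""
    def __init__(self):
        self.streams = {}
    def append(self, aggregate_id, events):
        self.streams.setdefault(aggregate_id, []).extend(events)
    def load(self, aggregate_id):
        return list(self.streams.get(aggregate_id, []))

class AccountRepository:
    def __init__(self, store):
        self.store = store
    def get(self, account_id):
        # Regenerate the aggregate by replaying its history.
        account = Account(account_id)
        for event in self.store.load(account_id):
            account._apply(event)
        return account
    def save(self, account):
        self.store.append(account.id, account.uncommitted)
        account.uncommitted = []

def handle_deposit(repo, account_id, amount):
    """Command handler: load, update, save. It emits no events itself."""
    account = repo.get(account_id)
    account.deposit(amount)
    repo.save(account)

store = EventStore()
repo = AccountRepository(store)
handle_deposit(repo, "a1", 100)
handle_deposit(repo, "a1", 50)
```

After the two commands, a fresh `repo.get("a1")` replays both `MoneyDeposited` events and arrives at a balance of 150, which is exactly the "regenerated every time" behavior described above.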