I started to read about the Event-Sourcing pattern combined with CQRS. As far as I understand, CQRS is a pattern in which we separate the write and the read actions, and event sourcing is a pattern where everything in the system is initiated by a command that triggers an event. The event-sourcing pattern requires an event bus. There are a couple of things that I didn't manage to understand.
The event store contains all the events that ever happened to a given entity. If I want to query the current state of that entity, I have to read all of its events and replay them to recreate the state.
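Something like this minimal sketch is what I mean by replaying (the `AccountEvent` types and the bank-account example are just made up for illustration):

```typescript
// Hypothetical account entity rebuilt by replaying its event history.
type AccountEvent =
  | { type: "AccountOpened"; accountId: string }
  | { type: "MoneyDeposited"; accountId: string; amount: number }
  | { type: "MoneyWithdrawn"; accountId: string; amount: number };

interface AccountState {
  accountId: string;
  balance: number;
}

// Fold the full event history into the current state.
function rebuildState(events: AccountEvent[]): AccountState | null {
  return events.reduce<AccountState | null>((state, event) => {
    switch (event.type) {
      case "AccountOpened":
        return { accountId: event.accountId, balance: 0 };
      case "MoneyDeposited":
        return state && { ...state, balance: state.balance + event.amount };
      case "MoneyWithdrawn":
        return state && { ...state, balance: state.balance - event.amount };
    }
  }, null);
}
```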
The entire event history is kept in the event store.
Why can't I have one microservice that is responsible for saving each event to an event database (if I want to log those events for further actions, something like Kafka), and a separate microservice that applies the change to the entity in a regular database (a simple update to the entity's document in MongoDB, for example)? When those microservices finish their work, the event would be removed from the event store (let's say I implement this event store using a queue).
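To make the idea concrete, here is a rough sketch of the two consumers I have in mind, written with the official `mongodb` Node driver. The `EntityEvent` shape, the collection names, and the `handle` wrapper are just illustrative assumptions, and the queue client that would delete the processed message is omitted:

```typescript
import { MongoClient } from "mongodb";

// Hypothetical shape of a message taken from the queue-backed event store.
interface EntityEvent {
  entityId: string;
  type: string;
  payload: Record<string, unknown>;
}

const mongo = new MongoClient("mongodb://localhost:27017");

// Service A: append the event to a log collection (stand-in for Kafka).
async function logEvent(event: EntityEvent): Promise<void> {
  await mongo
    .db("events")
    .collection("event_log")
    .insertOne({ ...event, loggedAt: new Date() });
}

// Service B: apply the event to the entity's current-state document.
async function projectEvent(event: EntityEvent): Promise<void> {
  await mongo
    .db("app")
    .collection("entities")
    .updateOne(
      { entityId: event.entityId },
      { $set: event.payload },
      { upsert: true }
    );
}

async function handle(event: EntityEvent): Promise<void> {
  await mongo.connect(); // no-op if the client is already connected
  await logEvent(event);
  await projectEvent(event);
  // At this point the message would be removed from the queue;
  // the deletion call depends on whatever broker backs the queue.
}
```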
In this way, whenever I need the current state of an entity, I simply query a database instead of querying the event store and rebuilding the state (or recalculating it from the event store and caching the result periodically). I don't understand why it is mandatory to store all the events forever; why isn't it optional?
For example, a Lambda function receives an incoming event, generates domain events, and stores each one in a separate SQS queue per event type. Each queue has its own Lambda function responsible for handling that event type, and the event is removed from the queue once it is processed.
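Roughly like this sketch, using the AWS SDK v3 SQS client; the event types, environment variables, and handler names are assumptions I made up for the example:

```typescript
import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";
import type { SQSEvent } from "aws-lambda";

const sqs = new SQSClient({});

// Hypothetical mapping from event type to its dedicated queue URL.
const QUEUE_URLS: Record<string, string> = {
  OrderPlaced: process.env.ORDER_PLACED_QUEUE_URL ?? "",
  OrderShipped: process.env.ORDER_SHIPPED_QUEUE_URL ?? "",
};

// First Lambda: takes the incoming event, produces domain events,
// and fans them out to one SQS queue per event type.
export async function routeEvents(
  events: { type: string; payload: unknown }[]
): Promise<void> {
  for (const event of events) {
    const queueUrl = QUEUE_URLS[event.type];
    if (!queueUrl) continue; // unknown event types are ignored in this sketch
    await sqs.send(
      new SendMessageCommand({
        QueueUrl: queueUrl,
        MessageBody: JSON.stringify(event),
      })
    );
  }
}

// Second Lambda, subscribed to one of the queues: handles its event type.
// When the handler returns without throwing, the SQS integration deletes
// the processed messages from the queue.
export async function handleOrderPlaced(event: SQSEvent): Promise<void> {
  for (const record of event.Records) {
    const domainEvent = JSON.parse(record.body);
    console.log("processing OrderPlaced", domainEvent);
    // ...apply the event to the read model here
  }
}
```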