I have an aggregate that consumes a file and creates a large number of other aggregates as a result.
For example:

- `Factory` aggregate (event sourced)
- `Product` aggregate (event sourced)
- `List<Product> Factory.CreateProducts(specifications);`
The call to `CreateProducts` produces one event, `FactoryCreatedProducts`, which will be saved to the event stream once I save the factory aggregate. The call also results in 10,000+ `Product` aggregates, each of which, upon instantiation, contains one `ProductCreated` event that will be saved to the event stream.
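Roughly, the shapes involved look like this (a minimal sketch only; `AggregateBase`, the event records and `ProductSpecification` are stand-ins for my actual types):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Placeholder base class: real one also rehydrates state from past events.
public abstract class AggregateBase
{
    public Guid Id { get; protected set; }
    public List<object> PendingEvents { get; } = new List<object>();
    protected void Apply(object @event) => PendingEvents.Add(@event);
}

public record ProductSpecification(string Sku);
public record FactoryCreatedProducts(Guid FactoryId, IReadOnlyList<Guid> ProductIds);
public record ProductCreated(Guid ProductId, ProductSpecification Spec);

public class Product : AggregateBase
{
    public Product(Guid id, ProductSpecification spec)
    {
        Id = id;
        // every new product starts its own stream with exactly one ProductCreated event
        Apply(new ProductCreated(id, spec));
    }
}

public class Factory : AggregateBase
{
    public List<Product> CreateProducts(IEnumerable<ProductSpecification> specifications)
    {
        var products = specifications.Select(s => new Product(Guid.NewGuid(), s)).ToList();

        // one event on the factory itself, no matter how many products were created
        Apply(new FactoryCreatedProducts(Id, products.Select(p => p.Id).ToList()));

        return products;
    }
}
```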
Currently, I have this coded as below:
```csharp
var factory = eventStream.Get(command.FactoryId);
var products = factory.CreateProducts(command.Specs);

// each product's ProductCreated event is saved to its own stream
foreach (var product in products) { eventStream.Save(product.PendingEvents); }

// the factory's FactoryCreatedProducts event is saved last
eventStream.Save(factory.PendingEvents);
```
To me this approach has some fundamental issues, the biggest of which are:
- multiple aggregates are being modified and saved within a single command.
- the overhead of carrying 10,000+ messages over the message bus and handling each of them independently in the read model generators.
As an alternative, I could push the whole thing onto the message bus as a single event, but that would mean a potentially huge event that grows with the number of generated aggregates; see the sketch below.
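Concretely, that alternative would look something like this (again hypothetical, reusing the placeholder types from the sketch above):

```csharp
// Hypothetical coarse-grained alternative: one event carrying everything the factory created.
// A single message crosses the bus, but the payload grows with the number of products.
public record ProductsCreatedFromFile(
    Guid FactoryId,
    IReadOnlyList<ProductCreated> Products);  // 10,000+ entries embedded in one event
```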
Is my approach to this problem common, or have I completely missed the mark? What is the proper way to handle this in a DDD/CQRS architecture where event sourcing is used for persistence of aggregates?
P.S. Not relevant to the question, but I am using C#, MongoDB for persistence, and Windows Service Bus (switching to RabbitMQ).