We are building a system that sells our API services to multiple companies. We have:

  • companies (companies that purchased our API)
  • accounts (each company can have multiple accounts, and each account has its own user types)
  • users (the users within an account)

Structurally, it looks something like this:

"company1" : {
    "accounts" : {
        "account1" : { "users" : ["user1", "user2"], "accountType" : ... },
        "account2" : { "users" : ["user1", "user2"], "accountType" : ... }
    }
}

One of the business rules states that users can't change accounts after registration. Another rule states that a user can change their user type, but only to a type allowed by that account's type.

From my understanding, my domain model should be called UserAccount, and it should consist of Account, User and UserType entities, where Account would be the aggregate root.

class UserAccount {
    int AccountId;
    string AccountName;
    int AccountTypeId;
    List<UserType> AvailableUserTypesForThisAccount;
    User User;

    void SetUserType(int userTypeId) {
        if (!AvailableUserTypesForThisAccount.Any(t => t.Id == userTypeId))
            throw new NotSupportedException();

        User.UserTypeId = userTypeId;
    }
}

With this aggregate we can change the user's type, but only to a type that is available for that account (one of the invariants).

When I fetch a UserAccount from the repository, I would fetch all the necessary tables (or entity data objects), map them to the aggregate, and return it as a whole.
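As a sketch of that load-and-assemble step (Java used for illustration; the row types, stub data, and names are hypothetical, not from the original design):

```java
import java.util.List;

// Hypothetical flat rows, standing in for the entity data objects
// fetched from the account, user, and user-type tables.
record AccountRow(int accountId, String name, int accountTypeId) {}
record UserRow(int userId, int accountId, int userTypeId) {}

// The aggregate that the rows are mapped into and returned as a whole.
record UserAccountAggregate(int accountId, String accountName,
                            List<Integer> availableUserTypeIds, int userTypeId) {}

class UserAccountRepository {
    // In a real repository these would be queries; here they are stubs.
    AccountRow fetchAccount(int accountId) { return new AccountRow(accountId, "acme", 1); }
    UserRow fetchUser(int userId) { return new UserRow(userId, 1, 2); }
    List<Integer> fetchAllowedTypes(int accountTypeId) { return List.of(2, 3); }

    // Fetch everything the aggregate needs, map it, return it whole.
    UserAccountAggregate load(int accountId, int userId) {
        AccountRow account = fetchAccount(accountId);
        UserRow user = fetchUser(userId);
        List<Integer> allowed = fetchAllowedTypes(account.accountTypeId());
        return new UserAccountAggregate(account.accountId(), account.name(),
                allowed, user.userTypeId());
    }
}
```

The point of the sketch is only that the repository returns the aggregate fully assembled, never a partial view of it.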

Is my understanding and modeling going in the right direction?

1 Answer


It's important to understand the design trade-off of aggregates; because aggregates partition your domain model into independent spaces, you gain the ability to modify unrelated parts of the model concurrently. But you lose the ability to enforce business rules that span multiple aggregates at the point of change.

What this means is that you need to have a clear understanding of the business value of those two things. For entities that aren't going to change very often, your business may prefer strict enforcement over concurrent changes; where the data is subject to frequent change, you will probably end up preferring more isolation.

In practice, isolation means evaluating whether or not the business can afford to mitigate the cases where "conflicting" edits leave the model in an unsatisfactory state.

With this aggregate we can change the user's type, but only to a type that is available for that account (one of the invariants).

With an invariant like this, an important question to ask is "what is the business cost of a failure here"?

If User and Account are separate aggregates, then you face the problem that a user is being assigned to a "type" at the same time that an account is dropping support for that type. So what would it cost you to detect (after the change) that a violation of the "invariant" had occurred, and what would it cost to apply a correction?

If Account is relatively stable (as seems likely), then most of those errors can be mitigated by comparing the user type to a cached list of those allowed in the account. This cache can be evaluated when the user is being changed, or in the UI that supports the edit. That will reduce (but not eliminate) the error rate without compromising concurrent edits.
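A minimal sketch of such a cache-based check (Java; the class and its shape are assumptions for illustration, not part of the original answer):

```java
import java.util.Map;
import java.util.Set;

// Hypothetical read-side cache: accountId -> the user types that
// account allows. Because it is only a cache, the check reduces the
// error rate but cannot eliminate it; a concurrent edit to the
// Account aggregate may have invalidated the snapshot.
class AllowedTypeCache {
    private final Map<Integer, Set<Integer>> allowedByAccount;

    AllowedTypeCache(Map<Integer, Set<Integer>> snapshot) {
        this.allowedByAccount = snapshot;
    }

    // "Likely" is deliberate: this is a best-effort guard, suitable
    // for the edit path or the UI, not a hard invariant.
    boolean isLikelyAllowed(int accountId, int userTypeId) {
        Set<Integer> allowed = allowedByAccount.get(accountId);
        return allowed != null && allowed.contains(userTypeId);
    }
}
```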

From my understanding, my domain model should be called UserAccount, and it should consist of Account, User and UserType entities, where Account would be the aggregate root.

I think you've lost the plot here. The "domain model" isn't really a named thing; it's just a collection of aggregates.

If you wanted an Account aggregate that contains Users and UserTypes, then you would probably model it something like

Account : Aggregate {
    accountId : Id<Account>,
    name : AccountName,
    users : List<User>,
    usertypes : List<UserType>
}

This design implies that all changes to a User must go through the Account aggregate, that no User belongs to more than one account, and that no other aggregate can reference a User directly (you have to negotiate with the Account aggregate).

Account::SetUserType(UserHint hint, UserType userTypeId){
    if(! usertypes.Contains(userTypeId)) {
        throw new AccountInvariantViolationException();
    }
    User u = findUser(users, hint);
    ...
}
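Filled out in runnable form (Java; the user lookup, field names, and exception choices are assumptions standing in for the elided parts of the sketch above):

```java
import java.util.*;

// A runnable rendering of the Account aggregate sketch; names are illustrative.
class User {
    final int id;
    int userTypeId;
    User(int id, int userTypeId) { this.id = id; this.userTypeId = userTypeId; }
}

class Account {
    final List<User> users = new ArrayList<>();
    final Set<Integer> userTypes = new HashSet<>();

    // Enforce the invariant first, then apply the change to the User
    // inside the aggregate boundary.
    void setUserType(int userId, int userTypeId) {
        if (!userTypes.contains(userTypeId))
            throw new IllegalStateException("account invariant violation");
        User u = users.stream()
                      .filter(x -> x.id == userId)
                      .findFirst()
                      .orElseThrow(() -> new NoSuchElementException("no such user"));
        u.userTypeId = userTypeId;
    }
}
```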

When I fetch a UserAccount from the repository, I would fetch all the necessary tables (or entity data objects), map them to the aggregate, and return it as a whole.

Yes, that's exactly right -- it's another reason that we generally prefer small, loosely coupled aggregates rather than one large aggregate.

What about having only the relationship between Account and User live in the Account aggregate, along with the type of the user (as an AccountUser entity), and having the rest of the user information live in a separate User aggregate?

That model could work for some kinds of problems -- in that case, the Account aggregate would probably look something like

Account : Aggregate {
    accountId : Id<Account>,
    name : AccountName,
    users : Map<Id<User>, UserType>,
    usertypes : List<UserType>
}

This design allows you to throw exceptions if somebody tries to remove a UserType from an Account when some User is currently of that type. But it cannot, for example, ensure that the user type described here is actually consistent with the state of the independent User aggregate -- or even be certain that the identified User exists (you'll be relying on detection and mitigation for those cases).
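A minimal runnable sketch of that rejection behavior (Java; method and field names are assumptions for illustration):

```java
import java.util.*;

// The Map-based design: the aggregate owns only the user -> type
// relationship, so it can refuse to drop a type that is in use,
// but it knows nothing about the independent User aggregates.
class Account {
    final Map<Integer, Integer> userTypeByUserId = new HashMap<>();
    final Set<Integer> userTypes = new HashSet<>();

    void addUserType(int typeId) { userTypes.add(typeId); }

    void assignUser(int userId, int typeId) {
        if (!userTypes.contains(typeId))
            throw new IllegalStateException("type not supported by account");
        userTypeByUserId.put(userId, typeId);
    }

    void removeUserType(int typeId) {
        if (userTypeByUserId.containsValue(typeId))
            throw new IllegalStateException("type still in use by a user");
        userTypes.remove(typeId);
    }
}
```

Note that `assignUser` takes a bare user id: nothing here can verify that such a User actually exists in its own aggregate.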

Is that better? Worse? It's not really possible to say without a more thorough understanding of the actual problem being addressed (it is really hard to tell from toy problems).

The principle is to identify which business invariants must be maintained at all times (as opposed to those where later reconciliation is acceptable), and then group together all of the state that must be kept consistent to satisfy them.

But what if an account can have hundreds or thousands of users? What would your vision of the aggregate be?

Assuming the same constraints -- that we have some aggregate responsible for the allowed range of user types -- if the aggregate gets too large to manage in a reasonable way, and the constraints imposed by the business cannot be relaxed, then I would probably compromise the "repository" abstraction and allow the enforcement of the set-validation rules to leak into the database itself.
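One common way that enforcement leaks into the database is a composite foreign key (a SQL sketch; the table and column names are hypothetical, not from the original design):

```sql
-- The set of user types each account allows.
CREATE TABLE account_user_types (
    account_id   INT NOT NULL,
    user_type_id INT NOT NULL,
    PRIMARY KEY (account_id, user_type_id)
);

-- Each user's membership references the (account, type) pair, so the
-- database itself refuses a user type the account does not allow, and
-- refuses to drop a type that is still in use.
CREATE TABLE account_users (
    user_id      INT NOT NULL PRIMARY KEY,
    account_id   INT NOT NULL,
    user_type_id INT NOT NULL,
    FOREIGN KEY (account_id, user_type_id)
        REFERENCES account_user_types (account_id, user_type_id)
);
```

The trade-off is exactly the one described: the rule is now enforced outside the domain model, in the persistence store.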

The conceit of DDD, taken from its original OO best-practices roots, is that the model is real and the persistence store is just an environmental detail. But looked at with a practical eye, in a world where processes have life cycles and there are competing consumers, it's the persistence store that represents the truth of the business.