3 votes

Currently we have a system that handles crediting and debiting of credits.

It stores each action as a transaction event in the database, but we also update a "credit_bank" table that holds the user's current balance.

          Table "public.credit_bank"
     Column     |          Type          | Modifiers 
----------------+------------------------+-----------
 id             | bigint                 | not null
 user_id        | bigint                 | 
 currency       | character varying(255) | 
 amount         | numeric(40,10)         | not null


                 Table "public.transaction"
      Column      |            Type             | Modifiers 
------------------+-----------------------------+-----------
 id               | bigint                      | not null
 amount           | numeric(40,10)              | not null
 credit_bank_id   | bigint                      | 
 status           | character varying(255)      | not null
 transaction_date | timestamp without time zone | 
 type             | character varying(255)      | not null

Since we process lots of transactions at a time, we were forced to lock the credit_bank table on every update to avoid lost updates from stale reads.

Then I came across event sourcing. Is that pattern good to apply in this scenario?

I'm really new to this, so please let me know if I have something wrong.

In my understanding, if we use event sourcing, we would not need to store the state of "credit_bank"; instead we would derive the state from events, or use snapshots. But how would that ensure that the current balance is still sufficient?

Also, if we get the state by processing events every time, would that be bad for performance?

2 Answers

2 votes

Is that pattern good to apply in this scenario?

This sounds like an example I use when trying to describe some corner cases, so I think so.

In my understanding, if we use event sourcing, we would not need to store the state of "credit_bank"; instead we would derive the state from events, or use snapshots. But how would that ensure that the current balance is still sufficient?

Do those transactions come from your business model, or are they things reported to you by banks out in the world somewhere? Because if it's the latter, then you need to give some thought to what "current" balance means -- you may be queried now, after a remote bank has dispatched a transaction to you but before that transaction has been recorded in your database. Is the balance still "current"?

With information coming from "somewhere else", we don't usually make raw assumptions about time. So not "the current balance", but "the balance at time=t", or "the balance at time=t1, when the most recent available update was at time=t2".

In other words, you start including temporal modeling, and discussing with the business whether the available latency is "good enough", what the costs are of making that latency shorter, what the costs are of not making it shorter, and so on.
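To make the temporal point concrete, here is a minimal sketch of "the balance at time=t" as a fold over events. The event shape and names are illustrative, not taken from the question's schema:

```python
from dataclasses import dataclass
from datetime import datetime
from decimal import Decimal

# Hypothetical event shape -- names are illustrative, not the question's schema.
@dataclass
class CreditEvent:
    amount: Decimal          # positive for a credit, negative for a debit
    occurred_at: datetime    # when the remote system says it happened

def balance_at(events: list[CreditEvent], t: datetime) -> Decimal:
    """Fold only the events that had occurred by time t."""
    return sum((e.amount for e in events if e.occurred_at <= t), Decimal("0"))

events = [
    CreditEvent(Decimal("100.00"), datetime(2023, 1, 1)),
    CreditEvent(Decimal("-30.00"), datetime(2023, 1, 5)),
]
print(balance_at(events, datetime(2023, 1, 3)))  # 100.00 -- the debit hadn't arrived yet
print(balance_at(events, datetime(2023, 2, 1)))  # 70.00
```

Note that the question "what is the balance?" now always carries an implicit "as of when?", which is exactly the discussion to have with the business.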

if we get the state by processing events every time, would that be bad for performance?

It might be. Often, event sourcing is coupled with CQRS, which is another sort of latency tradeoff -- the responsibility for updating the model is separated from the responsibility for querying the model, with a bit of plumbing to copy new data from the update model to the query model.
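The separation described above can be sketched in a few lines. This is a deliberately minimal in-memory version -- the event log, the balances projection, and the cursor-based "plumbing" are all assumptions for illustration:

```python
from collections import defaultdict
from decimal import Decimal

event_log = []                                # write side: append-only event store
balances = defaultdict(lambda: Decimal("0"))  # read side: per-user projection

def append_event(user_id: int, amount: Decimal) -> None:
    """Write side: record the fact; do not touch the query model here."""
    event_log.append({"user_id": user_id, "amount": amount})

def project_new_events(last_seen: int) -> int:
    """The 'plumbing': copy events newer than last_seen into the query model."""
    for event in event_log[last_seen:]:
        balances[event["user_id"]] += event["amount"]
    return len(event_log)  # new cursor position

append_event(1, Decimal("50"))
append_event(1, Decimal("-20"))
cursor = project_new_events(0)
print(balances[1])  # 30
```

Queries read `balances` without replaying anything; the cost is that the projection lags the log until `project_new_events` runs again, which is the latency tradeoff the answer mentions.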

1 vote

Since we process lots of transactions at a time, we were forced to lock the credit_bank table on every update to avoid lost updates from stale reads.

You probably needed to use database transactions (I'm not referring to financial transactions here) in order to keep the two tables consistent. That hurts performance and scalability.

Then I came across event sourcing. Is that pattern good to apply in this scenario?

Event sourcing is a perfect fit for append-only systems, so yes.

In my understanding, if we use event sourcing, we would not need to store the state of "credit_bank"; instead we would derive the state from events, or use snapshots. But how would that ensure that the current balance is still sufficient?

Before every withdrawMoneyFromTheAccount command, you replay all the financial transactions on that account and compute the current balance; then you compare the transaction amount to that balance and permit or reject the transaction.
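That check can be sketched like this, assuming a hypothetical in-memory list of signed amounts as the account's event stream:

```python
from decimal import Decimal

def handle_withdraw(events: list[Decimal], amount: Decimal) -> bool:
    """Replay the account's events, then permit or reject the withdrawal."""
    balance = sum(events, Decimal("0"))
    if amount > balance:
        return False               # reject: insufficient funds
    events.append(-amount)         # accept: record the debit as a new event
    return True

account = [Decimal("100"), Decimal("-40")]      # replays to a balance of 60
print(handle_withdraw(account, Decimal("80")))  # False -- rejected
print(handle_withdraw(account, Decimal("50")))  # True -- account now replays to 10
```

The key point is that the current state is never stored; it is recomputed from the stream at the moment the command needs it (snapshots would just cache part of that replay).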

Also, if we get the state by processing events every time, would that be bad for performance?

For locking, instead of pessimistic locking (like database transactions that lock rows) you can use optimistic locking with a version column. The real performance benefit comes from the fact that with event sourcing you only need to protect one table, the event store, rather than two tables that must be kept in sync, as in your current architecture.
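Here is a minimal sketch of the optimistic approach: each writer states the version it last read, and the append fails if another writer got there first. The class and names are illustrative, standing in for a version column in the event-store table:

```python
class ConcurrencyError(Exception):
    pass

class EventStream:
    """Append-only stream guarded by an expected-version check."""

    def __init__(self):
        self.events = []
        self.version = 0   # plays the role of the version column

    def append(self, event, expected_version: int) -> None:
        if expected_version != self.version:
            # Someone else appended since we read: caller must re-read and retry.
            raise ConcurrencyError("stream moved on; re-read and retry")
        self.events.append(event)
        self.version += 1

stream = EventStream()
v = stream.version
stream.append({"type": "credit", "amount": 100}, expected_version=v)      # accepted
try:
    stream.append({"type": "debit", "amount": 10}, expected_version=v)    # stale version
except ConcurrencyError:
    print("conflict detected")
```

No row is ever locked: a conflicting write simply fails cheaply, and the loser replays the stream and retries, which tends to scale much better than holding locks across the two-table update in the current design.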