
Haven't worked with SAGAs and spring-kafka (and spring-cloud-stream-kafka-binder) for a while.

Context: there are several (3+) Spring Boot microservices that have to span a business transaction in order to keep data in an eventually consistent state. They use the Database-per-Service approach (each service stores data in Postgres) and collaborate via Kafka as an event store.

I'm going to apply the SAGA pattern (either the choreography or the orchestration approach; let's stick with the first one) to manage transactions over multiple services.

The question is: how do we support local transactions when using an RDBMS (Postgres) as the data store along with Kafka as the event store/messaging middleware?

Nowadays, does spring-kafka actually support JTA transactions, and would it be enough to wrap the RDBMS and Kafka producer operations in @Transactional methods? Or do we still have to apply one of the transactional microservices patterns (like Transactional Outbox, Transaction Log Tailing, or Polling Publisher)?

Thanks in advance


1 Answer


Kafka does not support JTA/XA. The best you can do is "Best Effort 1PC" - see Dave Syer's JavaWorld article; you have to handle possible duplicates.

Spring for Apache Kafka provides the ChainedKafkaTransactionManager; for consumer-initiated transactions, the CKTM should be injected into the listener container.

The CKTM should have the KTM first, followed by the RDBMS, so the RDBMS transaction will be committed first; if it fails, the Kafka tx will roll back and the record redelivered. If the DB succeeds but Kafka fails, the record will be redelivered (with default configuration).
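As a sketch, the wiring could look like the following (bean names and the listener container factory setup are illustrative; assumes spring-kafka on the classpath with a transactional ProducerFactory and a DataSource configured). The chain starts transactions in declaration order and commits them in reverse, which is why the KTM goes first:

```java
import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.transaction.ChainedKafkaTransactionManager;
import org.springframework.kafka.transaction.KafkaTransactionManager;

public class TxConfig {

    @Bean
    public KafkaTransactionManager<String, String> kafkaTransactionManager(
            ProducerFactory<String, String> producerFactory) {
        return new KafkaTransactionManager<>(producerFactory);
    }

    @Bean
    public DataSourceTransactionManager dataSourceTransactionManager(DataSource dataSource) {
        return new DataSourceTransactionManager(dataSource);
    }

    // KTM first, RDBMS second: commits unwind in reverse order,
    // so the DB commits before the Kafka transaction.
    @Bean
    public ChainedKafkaTransactionManager<String, String> chainedTm(
            KafkaTransactionManager<String, String> ktm,
            DataSourceTransactionManager dstm) {
        return new ChainedKafkaTransactionManager<>(ktm, dstm);
    }

    // Inject the chained TM into the listener container so that
    // consumer-initiated transactions span both resources.
    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory,
            ChainedKafkaTransactionManager<String, String> chainedTm) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        factory.getContainerProperties().setTransactionManager(chainedTm);
        return factory;
    }
}
```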

For producer-only transactions, you can use @Transactional. In that case, the TM can just be the RDBMS TM and Spring-Kafka will synchronize a local Kafka transaction, committing last.
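A minimal producer-side sketch (the service, table, and topic names are hypothetical): the @Transactional method runs under the JDBC transaction manager, and spring-kafka synchronizes a local Kafka transaction with it, committing the Kafka side after the DB commit.

```java
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderService {

    private final JdbcTemplate jdbcTemplate;
    private final KafkaTemplate<String, String> kafkaTemplate;

    public OrderService(JdbcTemplate jdbcTemplate,
                        KafkaTemplate<String, String> kafkaTemplate) {
        this.jdbcTemplate = jdbcTemplate;
        this.kafkaTemplate = kafkaTemplate;
    }

    // Driven by the DataSourceTransactionManager; with a transactional
    // ProducerFactory, the send participates in a synchronized local Kafka
    // transaction that is committed last, after the DB commit.
    @Transactional
    public void createOrder(String orderId) {
        jdbcTemplate.update("INSERT INTO orders (id) VALUES (?)", orderId);
        kafkaTemplate.send("order-events", orderId, "ORDER_CREATED");
    }
}
```

Note that commit-last still leaves a small window where the DB commits but the Kafka commit fails, which is why duplicates/idempotency must be handled downstream.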

See here for more information.