We are using Spring Cloud Stream with Kafka and need exactly-once semantics. We have one solution that is working as expected:

1) Enable idempotence and transactions on the producer.
2) On the consumer side, check for duplicate messages with a MetadataStore, keyed by (offsetId + partitionId + topicName).

With the above solution we have no message loss and no duplicate processing.
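For context, the producer side of step 1 is configured roughly like this in the Kafka binder (a minimal sketch; the `tx-` prefix value is an assumption — setting `transaction-id-prefix` makes the binder's producers transactional, which in turn forces idempotence):

```yaml
spring:
  cloud:
    stream:
      kafka:
        binder:
          transaction:
            transaction-id-prefix: tx-   # enables transactional (and hence idempotent) producers
          required-acks: all             # acks=all, required for idempotent writes
```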
But now we have found that the Kafka API provides a method, producer.sendOffsetsToTransaction(), which can eliminate duplicate processing on the consumer side without any MetadataStore logic. I am not sure how to use sendOffsetsToTransaction with Spring Cloud Stream.
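With the plain kafka-clients API (2.5+), the read-process-write pattern we found looks roughly like this (a minimal sketch; the bootstrap server, topic names, group id, and transactional id are placeholder assumptions — the point is that the consumed offsets are committed inside the producer's transaction, so a failed batch is neither committed nor produced):

```java
import java.time.Duration;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class ReadProcessWriteLoop {
    public static void main(String[] args) {
        Properties cProps = new Properties();
        cProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        cProps.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");
        cProps.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");        // offsets go through the transaction
        cProps.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");  // ignore records from aborted transactions
        cProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        cProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        Properties pProps = new Properties();
        pProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        pProps.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "tx-1");           // also forces enable.idempotence=true
        pProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        pProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(cProps);
             KafkaProducer<String, String> producer = new KafkaProducer<>(pProps)) {
            consumer.subscribe(List.of("input-topic"));
            producer.initTransactions();
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                if (records.isEmpty()) continue;
                producer.beginTransaction();
                try {
                    Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
                    for (ConsumerRecord<String, String> record : records) {
                        producer.send(new ProducerRecord<>("output-topic", record.key(), record.value()));
                        // store the next offset to consume, hence offset + 1
                        offsets.put(new TopicPartition(record.topic(), record.partition()),
                                    new OffsetAndMetadata(record.offset() + 1));
                    }
                    // commit the consumed offsets atomically with the produced records
                    producer.sendOffsetsToTransaction(offsets, consumer.groupMetadata());
                    producer.commitTransaction();
                } catch (Exception e) {
                    producer.abortTransaction();  // offsets not committed; the batch will be redelivered
                }
            }
        }
    }
}
```

My question is how to get this same behavior through the Spring Cloud Stream binder rather than by hand-rolling the loop above.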