
Regarding Confluent Blog

Exactly-once Semantics are Possible: Here’s How Kafka Does it

Exactly-once semantics: even if a producer retries sending a message, the message is delivered exactly once to the end consumer. Exactly-once is the most desirable delivery guarantee, but also a poorly understood one, because it requires cooperation between the messaging system itself and the applications producing and consuming the messages. For instance, if after consuming a message successfully you rewind your Kafka consumer to a previous offset, you will receive all the messages from that offset to the latest one all over again. This shows why the messaging system and the client application must cooperate to make exactly-once semantics happen.
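Concretely, the cooperation the post describes surfaces as client-side configuration rather than anything broker-only. A minimal sketch of the exactly-once-related Kafka client settings (the `transactional.id` value here is a placeholder, not from the post):

```properties
# Producer side: idempotent, transactional writes
enable.idempotence=true
transactional.id=my-transactional-app
acks=all

# Consumer side: only read messages from committed transactions
isolation.level=read_committed
enable.auto.commit=false
```

With `enable.auto.commit=false`, the consuming application still has to commit offsets atomically with its own processing results, which is exactly the "client application must cooperate" part.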

  1. My understanding is that the title and the message of the above conflict. Am I right or not?

  2. On my last posting, the Kafka folks stated that Confluent takes care of all these things. So am I to assume that using Kafka Connect with Confluent means I will get exactly-once behaviour guaranteed, or not?

Kafka Connect isn't specific to Confluent... It's up to the source/sink connector to implement offset storage for reading/storing data exactly once... The blog post was not calling out Connect, though, but rather the Producer/Consumer and, by extension, the Kafka Streams API. – OneCricketeer
I am aware of the Connect and Confluent point. It was simply a related, and valid, question. But more importantly, do I have a point on the question? I think so... @cricket_007 – thebluephantom
It's not clear which connector you are referring to. The HDFS and S3 sinks claim to have exactly-once delivery, and sinks are easier to configure than sources because you can track the consumed offsets, as with any Kafka consumer client... For the JDBC source, for example, you really only get primary-key scans or timestamp tracking; if you use bulk mode, then you are repeatedly scanning the database, and you get duplicates. Plus see issues.apache.org/jira/browse/KAFKA-6080 – OneCricketeer
Also youtu.be/CeDivZQvdcs?t=301, and all the other KIPs and whitepapers about it... – OneCricketeer
"Here's how Kafka does it" ... by adding some client properties which must be understood by the messaging system and the client applications. I.e. Kafka isn't only the brokers; the clients must address the issue as well. Spark and Flink, for example, have external offset storage and can de-dupe messages on their own using distinct functions. – OneCricketeer

1 Answer


There is still work to do on the client side. By Confluent's own admission, the claim that "Kafka does it" is a little too optimistic.

cricket_007's comments allude to and confirm my point of view where exactly-once semantics are concerned.

Some Confluent connectors do have exactly-once guarantees, as he points out, albeit that was already well understood.
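The "work on the client side" can be as simple as idempotent processing: if a retry or an offset rewind redelivers messages, the consuming application drops the ones it has already seen. A minimal sketch in plain Python (no Kafka dependency; real code would key on a message ID or on topic/partition/offset instead of the `"id"` field assumed here):

```python
def dedupe(messages, seen=None):
    """Yield each message once, skipping redeliveries with the same id.

    `seen` is the set of already-processed ids; in a real application this
    state would be persisted alongside the committed offsets.
    """
    seen = set() if seen is None else seen
    for msg in messages:
        if msg["id"] in seen:
            continue  # duplicate from a retry/rewind: drop it
        seen.add(msg["id"])
        yield msg


# A rewound consumer redelivers messages 2 and 3:
redelivered = [
    {"id": 1, "value": "a"},
    {"id": 2, "value": "b"},
    {"id": 3, "value": "c"},
    {"id": 2, "value": "b"},  # duplicate
    {"id": 3, "value": "c"},  # duplicate
]
unique = list(dedupe(redelivered))
print([m["id"] for m in unique])  # → [1, 2, 3]
```

This is the same de-duplication idea OneCricketeer attributes to Spark and Flink, just reduced to its essence: the broker alone cannot provide it, the client has to keep the state.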