
Kafka noob here! I have two questions:

1) Consider a few Kafka consumers running on different Kubernetes pods as part of the same Kafka consumer group, consuming from compacted topics. Now, let's say one of the pods goes down and comes back up after a while. My question is: will the consumer in question receive all messages from that compacted Kafka topic, or will it receive only the messages that arrived after it came back from the failure?

2) I understand that Kafka consumers receive messages from partitions starting from a "committed offset". How will this work in the case of compacted topics, since Kafka will send only the events at the latest offsets?


1 Answer


Consumers work the same way for compacted topics as for non-compacted ones. During compaction, if several offsets share the same key, only the record at the latest offset is kept, but the offsets themselves are never reassigned. For example, if offsets 10, 11 and 12 all have the same key, only the record at offset 12 is retained after compaction, and a consumer fetching from offset 10, 11 or 12 will get the same result, i.e. the key-value stored at offset 12 (since the records at 10 and 11 have been deleted).
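A minimal sketch of this behaviour using the standard Java KafkaConsumer (the broker address, topic name, partition and offsets here are assumptions, not from the question): seeking to an offset that was compacted away simply returns the next surviving record.

```java
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class CompactedOffsetDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("my-compacted-topic", 0); // assumed topic/partition
            consumer.assign(Collections.singletonList(tp));

            // Seek to an offset that may have been removed by compaction (e.g. 10).
            consumer.seek(tp, 10L);

            // If the records at offsets 10 and 11 were compacted away, the first
            // record returned here is the surviving one at offset 12.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            records.forEach(r ->
                System.out.printf("offset=%d key=%s value=%s%n", r.offset(), r.key(), r.value()));
        }
    }
}
```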

Coming to your questions -

1. A consumer can fetch from any desired offset; the only caveat is that if some offsets have been compacted away, you will get the latest surviving value for those offsets.

2. As explained above, consumers will continue to fetch from the last committed offset, and if the offsets to be fetched have been compacted you might get duplicate messages (see the sketch below for a group consumer resuming from its committed offset).
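For illustration, a minimal sketch of a consumer-group subscriber that commits offsets manually and, after a pod restart, resumes polling from the last committed offset (the group id, topic name and broker address are assumptions):

```java
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class GroupConsumerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
        props.put("group.id", "my-consumer-group");          // same group id across all pods
        props.put("enable.auto.commit", "false");            // commit manually after processing
        props.put("auto.offset.reset", "earliest");          // only used when no committed offset exists
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-compacted-topic")); // assumed topic name

            while (true) {
                // After a restart, polling resumes from the last committed offset
                // of whichever partitions this consumer is assigned.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                records.forEach(r ->
                    System.out.printf("offset=%d key=%s value=%s%n", r.offset(), r.key(), r.value()));

                // Records processed but not yet committed may be redelivered after a
                // crash, which is where duplicate messages can come from.
                consumer.commitSync();
            }
        }
    }
}
```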

The compaction logic is described in detail in the Kafka documentation: https://kafka.apache.org/documentation.html#design_compactionbasics