
I have a Kafka consumer method annotated with @KafkaListener. I have set a RetryTemplate on the container, and the retry config is such that it always retries for a couple of exceptions if they occur while processing the message. I have also set max-poll-records to 1. If this situation occurs in production and the consumer keeps retrying the message forever, will the broker think this consumer is dead and trigger a rebalance? Or, while retrying, does the consumer poll for the same message that failed to process? If so, since poll is still happening, my assumption is that there wouldn't be any rebalancing. Also, I am committing offsets manually, so my enable.auto.commit property is set to false and the ack-mode is MANUAL. Can anyone please clarify? Thanks in advance.
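For reference, the setup looks roughly like this (a simplified sketch, not my exact code; the topic name and the MyTransientException type are placeholders):

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.ContainerProperties;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;

@Configuration
public class KafkaConsumerConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {

        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        // offsets are committed manually; enable.auto.commit=false and
        // max.poll.records=1 are set in the consumer properties
        factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL);

        // retry "forever" for a couple of exception types
        Map<Class<? extends Throwable>, Boolean> retryable = new HashMap<>();
        retryable.put(MyTransientException.class, true); // placeholder exception type
        RetryTemplate retryTemplate = new RetryTemplate();
        retryTemplate.setRetryPolicy(new SimpleRetryPolicy(Integer.MAX_VALUE, retryable));
        factory.setRetryTemplate(retryTemplate);
        return factory;
    }

    @KafkaListener(topics = "my-topic")
    public void listen(ConsumerRecord<String, String> record, Acknowledgment ack) {
        process(record);   // may throw MyTransientException
        ack.acknowledge(); // manual offset commit
    }

    private void process(ConsumerRecord<String, String> record) { /* ... */ }

    static class MyTransientException extends RuntimeException { }
}
```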


1 Answer


Yes; with non-stateful (stateless) retry at the listener adapter level (the default), the retries are performed in memory without calling poll() again, so a rebalance will be triggered once max.poll.interval.ms is exceeded.

You should use Stateful Retry instead.

In that case, the exception is thrown to the container after each attempt, and a SeekToCurrentErrorHandler re-seeks the unprocessed partitions (including the one with the failed record), so the record is redelivered on the next poll. You still need to ensure the largest backoff time is less than max.poll.interval.ms. There is no need to set max.poll.records to 1, because seeks are performed on all unprocessed partitions.
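For illustration, a stateful retry setup could look something like this (a minimal sketch, assuming a 2.x version where the container factory exposes setStatefulRetry; the backoff and attempt values are placeholders):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.SeekToCurrentErrorHandler;
import org.springframework.retry.backoff.FixedBackOffPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;

@Configuration
public class StatefulRetryConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {

        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);

        // Stateful retry: each failed attempt is rethrown to the container, the error
        // handler re-seeks, and the record is redelivered on the next poll(), so the
        // consumer keeps polling between attempts and no rebalance is triggered.
        factory.setRetryTemplate(retryTemplate());
        factory.setStatefulRetry(true);
        factory.setErrorHandler(new SeekToCurrentErrorHandler());
        return factory;
    }

    private RetryTemplate retryTemplate() {
        RetryTemplate template = new RetryTemplate();
        FixedBackOffPolicy backOff = new FixedBackOffPolicy();
        backOff.setBackOffPeriod(2_000L); // keep each backoff well below max.poll.interval.ms
        template.setBackOffPolicy(backOff);
        template.setRetryPolicy(new SimpleRetryPolicy(4)); // illustrative attempt count
        return template;
    }
}
```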

Starting with version 2.3, you can eliminate retry at the listener level altogether and just use a SeekToCurrentErrorHandler configured with a BackOff.
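Something along these lines, for example (a sketch only; the backoff values are placeholders and the bean/class names are mine):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.SeekToCurrentErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

@Configuration
public class ErrorHandlerConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {

        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);

        // No RetryTemplate: the error handler re-seeks the failed record and it is
        // redelivered by the next poll(), so the consumer keeps polling between attempts.
        // FixedBackOff(1_000L, 2): wait 1s between deliveries, redeliver at most 2 more times.
        // A recoverer (e.g. DeadLetterPublishingRecoverer) can be passed as the first
        // constructor argument to handle the record once retries are exhausted.
        factory.setErrorHandler(new SeekToCurrentErrorHandler(new FixedBackOff(1_000L, 2)));
        return factory;
    }
}
```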