Using Kafka broker 1.0.1 and spring-kafka 2.1.6.RELEASE.
I'm using a batched consumer with the following settings:
// Other settings are not shown..
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "100");
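For context, batch delivery is enabled on the listener container factory, roughly like this (a simplified sketch; the bean name and the rest of the consumer configuration are omitted):

// In a @Configuration class (types from org.springframework.kafka.config / org.springframework.kafka.core):
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> batchFactory(
        final ConsumerFactory<String, String> consumerFactory) {
    final ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // Hand the listener a List of records per poll instead of one record at a time
    factory.setBatchListener(true);
    return factory;
}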
I use a Spring listener in the following way:
@KafkaListener(topics = "${topics}", groupId = "${consumer.group.id}")
public void receive(final List<String> data,
                    @Header(KafkaHeaders.RECEIVED_PARTITION_ID) final List<Integer> partitions,
                    @Header(KafkaHeaders.RECEIVED_TOPIC) Set<String> topics,
                    @Header(KafkaHeaders.OFFSET) final List<Long> offsets) {
    // ...code...
}
I always find that a few messages remain in the batch and are not received in my listener. It appears that if the remaining messages number fewer than a full batch, they aren't consumed (perhaps they are held in memory and never published to my listener). Is there a setting to auto-flush the batch after a time interval so that these messages aren't left behind? What's the best way to handle this kind of situation with a batch consumer?
> I always find that a few messages remain in the batch
It's not clear what you mean. Turn on DEBUG logging to see the consumer activity. If it's still not doing what you expect, post the log and explain exactly what you mean. – Gary Russell

On the consumer side, fetching is controlled by fetch.max.wait.ms and fetch.min.bytes. On the producer side, the messages should go out straight away as long as you haven't increased linger.ms. – Gary Russell
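To make those comments concrete, here is a sketch of the consumer properties they refer to (the values shown are the Kafka defaults, spelled out explicitly; they are not taken from the original configuration):

// Continuing the consumer props shown in the question:
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "100");
// The broker answers a fetch as soon as fetch.min.bytes of data is available...
props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, "1");       // default
// ...or once fetch.max.wait.ms elapses, whichever comes first, so a partial
// batch is still returned by poll() and handed to the listener.
props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, "500");   // default

Note that max.poll.records is only an upper bound on the batch size; poll() returns whatever records are already available, it does not wait to fill the batch.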