2 votes

I have a Spring Boot application (Spring version 2.2.2.RELEASE) with a Kafka consumer that processes data from Kafka and serves it to multiple WebSockets. The subscription to Kafka succeeds, but not all messages from the selected topic are processed by the consumer: some messages arrive late and some are missed entirely, even though I have verified that the producer is sending the data. Below are the configuration properties I am using.

@Bean
public ConsumerFactory<String, String> consumerFactory() {
    final String BOOTSTRAP_SERVERS = kafkaBootstrapServer;
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_SERVERS);
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, consumerGroupId);
    return new DefaultKafkaConsumerFactory<>(props);
}

Is there any configuration I am missing?

1
How do you know the data is reaching the topic? Are you viewing it in a UI? – AnonymousAlias

1 Answer

3 votes

For a new consumer group (one that has never committed offsets for its group.id), you must set ConsumerConfig.AUTO_OFFSET_RESET_CONFIG (auto.offset.reset) to "earliest" to avoid missing records that already exist in the topic. The default is "latest", which skips everything produced before the consumer first subscribes.
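
A minimal sketch of the question's consumerFactory() with that property added; kafkaBootstrapServer and consumerGroupId are assumed to be injected fields, as in the original:

@Bean
public ConsumerFactory<String, String> consumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaBootstrapServer);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, consumerGroupId);
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
    // Start from the beginning of the topic when this group has no committed offsets yet.
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    return new DefaultKafkaConsumerFactory<>(props);
}

Note that auto.offset.reset only applies when the group has no committed offsets (or the committed offset no longer exists on the broker); once offsets have been committed, the consumer resumes from them regardless of this setting.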