The High-Level Consumer API seems to read one message at a time.
This is quite problematic for consumers that want to process those messages and submit them to a downstream system such as Solr or Elasticsearch, because those systems prefer to receive messages in bulk rather than one at a time.
Batching those messages in memory is not trivial either, because the offsets in Kafka then also need to be synced only once the batch has been committed downstream; otherwise a crashed kafka-consumer with uncommitted downstream messages (in Solr or ES) will already have had its offsets updated and will therefore lose messages.
Conversely, the consumer could consume messages more than once if it crashes after committing messages downstream but before updating the message offsets.
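For illustration, here is roughly the pattern I have in mind, sketched against the newer KafkaConsumer API (org.apache.kafka.clients.consumer) with auto-commit disabled; the broker address, topic name, group id, and the indexDownstream helper are just placeholders:

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Properties;

public class BatchingConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "indexer");
        // Commit offsets manually, only after the downstream batch succeeds.
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("events"));
            while (true) {
                // poll() already returns a batch of records, not one message.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                if (records.isEmpty()) continue;

                List<String> batch = new ArrayList<>();
                for (ConsumerRecord<String, String> record : records) {
                    batch.add(record.value());
                }

                // Hypothetical bulk write to Solr/Elasticsearch.
                indexDownstream(batch);

                // Commit offsets only after the downstream write succeeded.
                // A crash between indexDownstream() and commitSync() means the
                // batch is re-delivered on restart: duplicates, not lost messages.
                consumer.commitSync();
            }
        }
    }

    private static void indexDownstream(List<String> batch) {
        // Placeholder for a bulk request to Solr or Elasticsearch.
    }
}
```

The key point is that commitSync() runs only after the bulk write succeeds, so a crash re-delivers the batch (at-least-once) instead of losing it.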
If Kafka can consume messages in batches, some pointers to the relevant code or documentation would be much appreciated.
Thanks!