
I am externalising the Kafka consumer metadata for a topic in a database, including the consumer groups and the number of consumers in each group.

The Consumer_info table has the following columns:

Topic name
Consumer group name
Number of consumers in group
Consumer class name

At app server startup I read this table and create consumer threads based on the count stored there: if the consumer count for a group is set to 3, I create 3 consumer threads. The count is based on the number of partitions of the given topic.
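For illustration, here is a minimal sketch of that startup logic; ConsumerInfo and the method names are hypothetical placeholders that mirror a row of the Consumer_info table, not my actual code:

    import java.util.List;

    public class ConsumerBootstrap {

        // Hypothetical mirror of one row of the Consumer_info table.
        record ConsumerInfo(String topic, String groupId, int consumerCount, String consumerClassName) {}

        // Called at app server startup with the rows read from the database.
        public static void startAll(List<ConsumerInfo> rows) {
            for (ConsumerInfo row : rows) {
                // One thread per configured consumer in the group,
                // i.e. typically one per partition of the topic.
                for (int i = 0; i < row.consumerCount(); i++) {
                    new Thread(() -> runConsumer(row)).start();
                }
            }
        }

        private static void runConsumer(ConsumerInfo row) {
            // Create a Kafka consumer for row.topic() with group.id = row.groupId()
            // and poll in a loop (see the consumer sketch in the answer below).
        }
    }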

Now, if I need to scale out horizontally, how do I distribute the consumers belonging to the same group across multiple app server nodes without reading the same message more than once?

The consumer initialization code that runs at app server startup reads the consumer metadata from the database and creates all of the consumer threads on the same app server instance. Even if I add more app server instances, they would be redundant: the first server to start has already spawned the configured number of consumer threads, equal to the number of partitions, so any further consumers created on other instances would sit idle.

Can you suggest a better approach to scale out consumers horizontally?


1 Answer


"consumer groups and the number of consumers in each group"

Running kafka-consumer-groups --describe ad hoc would give you more up-to-date information than an external database query, especially since consumers can rebalance or fall out of the group at any moment.
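For example (the bootstrap address and group name below are placeholders):

    kafka-consumer-groups --bootstrap-server localhost:9092 --describe --group my-consumer-group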

"how do I distribute the consumers belonging to the same group across multiple app server nodes without reading the same message more than once"

This is how Kafka consumer groups operate out of the box, assuming you are not manually assigning partitions in your code: each partition is assigned to exactly one consumer in the group, so if every app server instance runs its consumers with the same group.id, Kafka spreads the partitions across the instances and rebalances whenever an instance joins or leaves.

It is not possible to read a message more than once after you have consumed, acked, and committed that offset within the group.
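A minimal sketch of a consumer relying on this out-of-the-box behaviour (the topic name, group id, and broker address are placeholders): every instance runs the same code with the same group.id and commits offsets after processing, so an acknowledged message is not re-read by the group.

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class GroupedConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            // Same group.id on every app server instance: Kafka assigns each
            // partition to exactly one consumer in the group and rebalances
            // automatically when instances are added or removed.
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-consumer-group");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                      "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                      "org.apache.kafka.common.serialization.StringDeserializer");
            // Commit offsets manually after processing so an acknowledged
            // message is not redelivered to the group.
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-topic"));
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    records.forEach(r -> System.out.printf("partition=%d offset=%d value=%s%n",
                                                           r.partition(), r.offset(), r.value()));
                    consumer.commitSync();
                }
            }
        }
    }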


I don't see the need for an external database when you could instead expose an API around the kafka-consumer-groups command.
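If you do build such an API, one possible sketch (not the only way) is to use Kafka's AdminClient, which exposes the same group information the kafka-consumer-groups tool prints; the bootstrap address and group id below are placeholders:

    import java.util.Collections;
    import java.util.Map;
    import java.util.Properties;
    import java.util.concurrent.ExecutionException;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.ConsumerGroupDescription;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    public class GroupInspector {
        public static void main(String[] args) throws ExecutionException, InterruptedException {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

            try (AdminClient admin = AdminClient.create(props)) {
                String groupId = "my-consumer-group"; // placeholder

                // Group members and their partition assignments.
                ConsumerGroupDescription description = admin
                        .describeConsumerGroups(Collections.singletonList(groupId))
                        .describedGroups().get(groupId).get();
                description.members().forEach(m ->
                        System.out.printf("member=%s assignment=%s%n",
                                          m.consumerId(), m.assignment().topicPartitions()));

                // Committed offsets per partition for the group.
                Map<TopicPartition, OffsetAndMetadata> offsets = admin
                        .listConsumerGroupOffsets(groupId)
                        .partitionsToOffsetAndMetadata().get();
                offsets.forEach((tp, om) ->
                        System.out.printf("%s committed=%d%n", tp, om.offset()));
            }
        }
    }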

Or you can use Streams Messaging Manager by Cloudera, which shows a lot of this information as well.