I have a topic, A, with 12 partitions. I have 3 Kafka brokers in a cluster. There are 4 partitions per broker for topic A. I haven't created any replicas as I am not concerned with resiliency.
I have a simple Java consumer using the kafka-clients library, with the following properties set:
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;

Properties properties = new Properties();
// all three brokers are listed as bootstrap servers
properties.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-serverA:9092,kafka-serverB:9092,kafka-serverC:9092");
properties.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
properties.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
properties.setProperty(ConsumerConfig.GROUP_ID_CONFIG, groupID); // groupID is defined elsewhere
properties.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
properties.setProperty(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, "100000"); // i.e. "max.partition.fetch.bytes"
There is more code that iterates over the ConsumerRecords and prints each record, and that part works fine. I have 12 messages in the topic, and I have verified via "kafka-run-class.sh kafka.admin.ConsumerGroupCommand" that there is one message in each partition. Each message is 100000 bytes, exactly equal to the max.partition.fetch.bytes limit.
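For reference, the consume-and-print part looks roughly like this (a minimal sketch; the subscription to topic "A", the single poll() call, and the 10-second timeout reflect my setup, but the exact names are illustrative):

import java.time.Duration;
import java.util.Collections;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// build the consumer from the properties shown above and subscribe to the topic
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(properties);
consumer.subscribe(Collections.singletonList("A"));

// one poll, then print which partition each record came from
ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(10));
for (ConsumerRecord<String, String> record : records) {
    System.out.println("partition=" + record.partition()
            + " offset=" + record.offset()
            + " size=" + record.serializedValueSize());
}
consumer.close();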
When I poll, I expect to see all 12 messages come back in the response. However, the response is very erratic: sometimes I get messages from only 4 partitions (suggesting that only one broker is responding to the consumer request), sometimes from 8, but never from all 12 partitions. Just for testing, I removed the max.partition.fetch.bytes property and observed the same behavior.
Am I missing anything? It seems that listing kafka-serverA, kafka-serverB, and kafka-serverC in the bootstrap config is not resulting in all 3 brokers serving the request.
Any help is greatly appreciated. I am running the brokers and the consumer on separate machines and they are adequately sized.
topicA was created manually on each broker with a replication factor of 1 and 4 partitions. Please correct me if I'm wrong. – Kashyap KN
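In case it is useful, this is roughly how I can double-check the partition layout the cluster actually reports for the topic (a sketch using the Java AdminClient; the single bootstrap address and the throws-everything error handling are simplifications):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.TopicPartitionInfo;

public class DescribeTopicA {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-serverA:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // fetch the cluster's view of the topic: partition count and leader per partition
            TopicDescription description = admin.describeTopics(Collections.singletonList("A"))
                    .values().get("A").get();
            System.out.println("partition count: " + description.partitions().size());
            for (TopicPartitionInfo p : description.partitions()) {
                System.out.println("partition " + p.partition() + " -> leader " + p.leader());
            }
        }
    }
}

Given the setup described above, I would expect this to report 12 partitions with leaders spread across the 3 brokers.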