
I have a Kafka cluster with 3 brokers. Replication is configured:

offsets.topic.replication.factor = 3

Everything works fine until one of the brokers goes down. Then the consumer (written in Scala) stops receiving messages and continuously generates the following messages:

2018-05-24 19:59:27 DEBUG Fetcher:425 - Leader for partition SOMETOPIC-1 unavailable for fetching offset, wait for metadata refresh

2018-05-24 19:59:27 DEBUG Fetcher:425 - Leader for partition SOMETOPIC-1 unavailable for fetching offset, wait for metadata refresh

2018-05-24 19:59:27 DEBUG NetworkClient:640 - Sending metadata request {topics=[SOMETOPIC]} to node 0

2018-05-24 19:59:27 DEBUG Metadata:180 - Updated cluster metadata version 5402 to Cluster(nodes = [kafka-1:9092 (id: 0 rack: null)], partitions = [Partition(topic = SOMETOPIC, partition = 0, leader = none, replicas = [1,], isr = []), Partition(topic = SOMETOPIC, partition = 1, leader = none, replicas = [2,], isr = []), Partition(topic = SOMETOPIC, partition = 2, leader = 0, replicas = [0,], isr = [0,])])

But everything works if I use kafka-console-consumer to receive messages. Please help.
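For context, the consumer is set up roughly along the lines of the sketch below; the broker list, group id and deserializers shown here are placeholders rather than my exact settings:

import java.util.{Collections, Properties}
import org.apache.kafka.clients.consumer.{ConsumerConfig, KafkaConsumer}
import org.apache.kafka.common.serialization.StringDeserializer

object SomeTopicConsumer {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    // Placeholder broker list and group id; the real values differ.
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-1:9092,kafka-2:9092,kafka-3:9092")
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "sometopic-consumer")
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, classOf[StringDeserializer].getName)
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, classOf[StringDeserializer].getName)

    val consumer = new KafkaConsumer[String, String](props)
    consumer.subscribe(Collections.singletonList("SOMETOPIC"))

    while (true) {
      // poll(long) keeps this compatible with pre-2.0 clients.
      val records = consumer.poll(500L)
      val it = records.iterator()
      while (it.hasNext) {
        val r = it.next()
        println(s"${r.topic()}-${r.partition()} offset=${r.offset()}: ${r.value()}")
      }
    }
  }
}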

Are you using a rebalancer to handle the failure situation? I mean, how will the consumer rebalance the load? – Raman Mishra
Am I really seeing logs where a partition has no leader? – Indraneel Bende
When a broker crashes, yes, that would require a metadata refresh for the consumer, as other brokers could become leaders for partitions, etc. Are there more logs? – Indraneel Bende

1 Answer


Finally fixed. Although I set offsets.topic.replication.factor = 3, that setting only applies to the internal __consumer_offsets topic; new topics were still being auto-created with the default replication factor of 1. Adding the following broker property fixed my issue:

default.replication.factor=3
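If you want to double-check what replication factor the auto-created topics actually got, a small program using Kafka's AdminClient along these lines should do it (the bootstrap server address is a placeholder). Creating topics explicitly with the desired replication factor, instead of relying on auto-creation and the broker default, also avoids the problem:

import java.util.{Collections, Properties}
import org.apache.kafka.clients.admin.{AdminClient, AdminClientConfig, NewTopic}

object TopicReplicationCheck {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-1:9092") // placeholder address
    val admin = AdminClient.create(props)

    // Print how many replicas (and in-sync replicas) each partition of the topic has.
    val description = admin.describeTopics(Collections.singletonList("SOMETOPIC")).all().get().get("SOMETOPIC")
    val it = description.partitions().iterator()
    while (it.hasNext) {
      val p = it.next()
      println(s"partition ${p.partition()}: replicas=${p.replicas().size()}, isr=${p.isr().size()}")
    }

    // Alternatively, create topics explicitly with 3 partitions and replication factor 3
    // instead of relying on auto-creation (topic name here is just an example):
    // admin.createTopics(Collections.singletonList(new NewTopic("SOMEOTHERTOPIC", 3, 3.toShort))).all().get()

    admin.close()
  }
}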