We have a Kafka cluster with 4 brokers and some topics with a replication factor of 1 and 10 partitions. At one point, 2 of our 4 Kafka servers failed, so now we have 2 brokers left with the same topics.
When I run the command

./kafka-topics.sh --zookeeper localhost:2181 --describe

I get this:
Topic:outcoming-notification-error-topic PartitionCount:10 ReplicationFactor:1 Configs:
Topic: outcoming-error-topic Partition: 0 Leader: 2 Replicas: 2 Isr: 2
Topic: outcoming-error-topic Partition: 1 Leader: 3 Replicas: 3 Isr: 3
Topic: outcoming-error-topic Partition: 2 Leader: 4 Replicas: 4 Isr: 4
Topic: outcoming-error-topic Partition: 3 Leader: 1 Replicas: 1 Isr: 1
Topic: outcoming-error-topic Partition: 4 Leader: 2 Replicas: 2 Isr: 2
Topic: outcoming-error-topic Partition: 5 Leader: 3 Replicas: 3 Isr: 3
Topic: outcoming-error-topic Partition: 6 Leader: 4 Replicas: 4 Isr: 4
Topic: outcoming-error-topic Partition: 7 Leader: 1 Replicas: 1 Isr: 1
Topic: outcoming-error-topic Partition: 8 Leader: 2 Replicas: 2 Isr: 2
Topic: outcoming-error-topic Partition: 9 Leader: 3 Replicas: 3 Isr: 3
How can I get rid of the partitions whose leaders are brokers 2 and 4? Or maybe I need to delete the partitions assigned to those leaders, but how?
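Would partition reassignment with kafka-reassign-partitions.sh be the right tool here? Below is a rough sketch of what I have in mind, assuming brokers 1 and 3 are the surviving ones (reassign.json is just a file name I made up), though I don't know whether reassignment can even complete when the only replica of a partition was on a broker that is now dead:

reassign.json:

{
  "version": 1,
  "partitions": [
    {"topic": "outcoming-error-topic", "partition": 0, "replicas": [1]},
    {"topic": "outcoming-error-topic", "partition": 2, "replicas": [3]},
    {"topic": "outcoming-error-topic", "partition": 4, "replicas": [1]},
    {"topic": "outcoming-error-topic", "partition": 6, "replicas": [3]},
    {"topic": "outcoming-error-topic", "partition": 8, "replicas": [1]}
  ]
}

./kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file reassign.json --execute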
UPD:
We also use kafka_exporter to monitor Kafka with Prometheus. After the 2 brokers went down, we get these errors in the kafka_exporter log:
level=error msg="Cannot get oldest offset of topic outcoming-error-topic partition 10: kafka server: In the middle of a leadership election, there is currently no leader for this partition and hence it is unavailable for writes." source="kafka_exporter.go:296"
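In case it is useful, I can also list just the partitions that currently have no leader with the --unavailable-partitions option of kafka-topics.sh:

./kafka-topics.sh --zookeeper localhost:2181 --describe --unavailable-partitions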