
I have a ZooKeeper cluster (version 3.4.13) containing 3 nodes and a Kafka cluster (version 2.11-2.1.0) containing 2 nodes. I am using Spring Kafka (version 2.1.9.RELEASE) to set up producers and consumers.

I am running a 2-node cluster of my Spring Boot application, with consumers of a particular topic. If I stop both Kafka nodes and then start only one of them (possibly one that does not act as the group coordinator), the consumers stop consuming messages from the topic (even after that Kafka node is up and running) until I restart my Spring Boot application.

I didn't find many resources on this issue. I need to understand the cause: when the Kafka cluster goes down and then one of the nodes is started again, the cluster should presumably elect a leader automatically.

Below is my consumer configuration:

Map<String, Object> properties = new HashMap<>();
properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaServerUrl + ":" + kafkaServerPort
        + (StringUtils.isEmpty(additionalBrokers) ? "" : "," + additionalBrokers));
properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
properties.put(ConsumerConfig.GROUP_ID_CONFIG, "testGroup");
properties.put(JsonDeserializer.TRUSTED_PACKAGES, "com.test.model");
properties.put(JsonDeserializer.VALUE_DEFAULT_TYPE, "com.test.EMailConfig");
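For reference, the consumer's broker-reconnection and metadata-refresh behavior is governed by standard Kafka client properties that the config above leaves at their defaults. The following is a sketch of the relevant settings (shown in properties-file form; the corresponding ConsumerConfig constants are noted in comments). The values are illustrative assumptions, not a confirmed fix for this issue:

```properties
# Standard Kafka consumer properties controlling reconnection and metadata refresh.
# Values below are illustrative only.
reconnect.backoff.ms=50          # ConsumerConfig.RECONNECT_BACKOFF_MS_CONFIG
reconnect.backoff.max.ms=10000   # ConsumerConfig.RECONNECT_BACKOFF_MAX_MS_CONFIG
metadata.max.age.ms=30000        # ConsumerConfig.METADATA_MAX_AGE_CONFIG
```

With the defaults, a healthy consumer should already retry its metadata requests against all bootstrap servers, so these settings mainly affect how quickly it notices the restarted broker.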

I expect the consumers to start consuming messages again when only one of the Kafka cluster nodes is started again.

Has anyone come across such an issue? Please post your recommendations.

Below is a screenshot of the output of the describe command when 1 broker is stopped: [screenshot]

Thanks in advance.

Have you configured your topics to have a replication factor of 2? If not, you can only consume if both brokers are up: $ kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic myTopic – Gary Russell
Thanks for your reply, Gary. Yes, I have configured the topics to have a replication factor of 2. – ernakulgoyal
Can you provide client logs (at TRACE or DEBUG level)? Have you validated that all partition leaders are elected on the remaining broker when 1 of them is down? – Mickael Maison
The consumer logs mainly show the following (when I shut down one Kafka node, possibly the group coordinator/leader): DEBUG o.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=emailConsumerGroup-1] Give up sending metadata request since no node is available – ernakulgoyal
Also, when I describe the consumer group, it doesn't show a client ID for any of the partitions (which is displayed when both Kafka brokers are running). However, when I describe the Kafka topic from the command line, it shows the correct broker ID in the ISR and Leader columns. – ernakulgoyal

1 Answer


The issue is not reproducible, at least for me. This is what I did:

  1. Started Kafka (2.12.24.0) using the Docker image: https://codeload.github.com/wurstmeister/kafka-docker/zip/master
  2. Started a bare-minimum spring-kafka project: https://codeload.github.com/wurstmeister/kafka-docker/zip/master
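For context, a two-broker test setup with the wurstmeister/kafka-docker image is typically brought up from a docker-compose file along the following lines. This is a sketch based on that project's documented usage; the advertised host IP and port mapping are assumptions, not taken from the answer's actual test:

```yaml
# Minimal sketch of a docker-compose file for the wurstmeister/kafka-docker image.
# The advertised host IP (192.168.99.100) is a placeholder for the Docker host's IP.
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092"                                     # unpinned host port so the service can scale
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.99.100   # assumption: replace with your Docker host IP
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
```

The service can then be scaled to two brokers with something like `docker-compose up -d --scale kafka=2`, matching the 2-node cluster described in the question.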

Then, with Postman, I sent a message; it was produced to the topic and consumed immediately. I then shut down the Kafka nodes and started them again. When I re-sent the message, it was produced and consumed fine.