
I have written a Java program that uses the Kafka client libraries. I read that the Kafka producer has an internal buffer to hold messages so that it can retry them later, so I created an idempotent Kafka producer with the following retry properties:

    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, System.getenv(KafkaConstants.KAFKA_URL));
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
    props.put(ProducerConfig.LINGER_MS_CONFIG, 1000);
    props.put(ProducerConfig.ACKS_CONFIG, "all");
    props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 60000);
    props.put(ProducerConfig.RETRIES_CONFIG, 3);
    props.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, 1000);
    props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 1);
    props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);

Before running the program, I keep the Kafka server (only one broker) down. When I run the program I get the exception "Failed to update metadata after 60000 ms". But when I restart the Kafka server, shouldn't it push the data to the Kafka topic, since I have set the retry properties?
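For reference, the 60-second wait for metadata happens inside `send()` and is bounded by the `max.block.ms` setting (60000 ms by default, which matches the error message). A minimal sketch of the properties involved, using plain string keys so it stands alone (the `max.block.ms` line is an addition for illustration, not part of my original program):

```java
import java.util.Properties;

public class ProducerProps {
    public static Properties build(String bootstrap) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrap);
        // send() blocks up to this long waiting for cluster metadata before
        // failing with "Failed to update metadata after 60000 ms"
        props.put("max.block.ms", "60000");
        // retries only kick in for transient errors *after* metadata is known
        props.put("retries", "3");
        props.put("retry.backoff.ms", "1000");
        return props;
    }
}
```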

Please help in this regard.

Thanks, Priyam Saluja


2 Answers


One of the first requests a Kafka client sends is a metadata request. Remember that the client connects to the brokers in the bootstrap servers list, but the leader of the partition it wants to send to may not be among them. For example, suppose there are three brokers B01, B02, B03, the bootstrap list contains only B01, and the producer wants to send messages to a topic partition whose leader is B02: the first metadata request is what lets the producer discover this, so that it can open a connection to B02 and send messages. I suspect the retry mechanism only comes into play after this step, because batching inside the producer relies on knowing the partitions and where they are located. You should check whether the retries work when you shut down the server after the metadata step has completed and the producer already knows who the partition leader is.
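To make the flow concrete, here is a toy model (not the real client API) of what the metadata step gives the producer: a map from partition to leader broker, which it must have before it can open a connection and drain its batches. The broker names match the example above; everything else is hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

public class MetadataModel {
    // Toy stand-in for the cluster metadata the producer fetches first:
    // partition number -> leader broker (B01..B03 as in the example).
    static Map<Integer, String> leaders = new HashMap<>();

    static String leaderFor(int partition) {
        // Without this lookup succeeding, the producer cannot learn that
        // partition 1's leader is B02 even though it bootstrapped via B01,
        // and the send fails before any retry logic is reached.
        String leader = leaders.get(partition);
        if (leader == null) {
            throw new IllegalStateException("Failed to update metadata");
        }
        return leader;
    }
}
```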


I found out the problem. Every time the Kafka producer tries to produce a message, it first updates its metadata (to discover the leader and partitions in the Kafka cluster). If it cannot get that information, it throws the error "Failed to update metadata after 60000 ms".

The second part is retry: the Kafka producer will retry only messages that failed because of transient errors, after metadata has been fetched successfully.
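As a rough sketch of why a broker that stays down long enough still fails even with retries configured: the retry window is bounded. Under the simplifying assumption that each attempt waits the full request timeout and then backs off before the next try (the real client also honors `delivery.timeout.ms`, which this ignores), the worst case is:

```java
public class RetryWindow {
    // Worst-case time spent retrying one batch: the initial attempt plus
    // `retries` retries, with a backoff pause before each retry.
    static long worstCaseMs(int retries, long requestTimeoutMs, long backoffMs) {
        return (retries + 1) * requestTimeoutMs + retries * backoffMs;
    }
}
```

With the settings from the question (`retries=3`, `request.timeout.ms=60000`, `retry.backoff.ms=1000`) that is 4 * 60000 + 3 * 1000 = 243000 ms, about four minutes; a broker outage longer than that exhausts the retries even after metadata was obtained.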