2 votes

I am trying to send messages from a producer to a Kafka broker in another data center. Both the producer and consumer use the default 0.10.0.0 configuration, and the messages are not small (around 500 KB). Most of the time when sending messages I run into these exceptions:

org.apache.kafka.common.errors.TimeoutException: Batch containing 1 record(s) expired due to timeout while requesting metadata from brokers for topic-0

org.apache.kafka.common.errors.TimeoutException: Failed to allocate memory within the configured max blocking time 60000 ms.

After that, no more messages get through (the callback for the remaining messages is never even invoked).
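Both exceptions point at producer-side timeouts and buffer exhaustion, which the standard producer settings control. Below is a minimal sketch of a producer configuration that loosens those limits for a high-latency cross-DC link. The broker hostname is hypothetical, and the exact values (timeouts, buffer size) are illustrative assumptions, not recommendations from the question:

```java
import java.util.Properties;

public class CrossDcProducerConfig {

    // Builds producer properties tuned for a slow cross-DC link.
    // All values here are illustrative; tune them for your environment.
    public static Properties producerProps() {
        Properties props = new Properties();
        // Hypothetical remote-DC broker address.
        props.put("bootstrap.servers", "broker.remote-dc.example:9092");
        // Allow more time for requests over the high-latency link
        // (default in 0.10.0.0 is 30000 ms).
        props.put("request.timeout.ms", "120000");
        // How long send() may block waiting for metadata or buffer
        // space; the second exception fired at the 60000 ms default.
        props.put("max.block.ms", "120000");
        // With ~500 KB records, the default 32 MB buffer fills after
        // ~64 in-flight records; enlarge it to avoid allocation stalls.
        props.put("buffer.memory", "67108864");
        return props;
    }

    public static void main(String[] args) {
        Properties p = producerProps();
        System.out.println("max.block.ms = " + p.getProperty("max.block.ms"));
    }
}
```

These properties would then be passed to `new KafkaProducer<>(props)` together with the usual serializer settings.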

You should check server.properties to verify that listeners=PLAINTEXT://your.host.name:9092 is configured correctly. — NangSaigon

It is correct, and the port is open. — AmirHossein

2 Answers

4 votes

According to Kafka documentation:

A small batch size will make batching less common and may reduce throughput (a batch size of zero will disable batching entirely). A very large batch size may use memory a bit more wastefully as we will always allocate a buffer of the specified batch size in anticipation of additional records.

Setting batch.size = 0 should resolve the issue.
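A sketch of what that change looks like in the producer properties. The broker address is a placeholder, and the surrounding settings are only the minimum needed for a complete configuration:

```java
import java.util.Properties;

public class NoBatchingProducerConfig {

    // Producer properties with batching disabled, per the answer above.
    public static Properties producerProps() {
        Properties props = new Properties();
        // Hypothetical broker address; replace with your own.
        props.put("bootstrap.servers", "broker.remote-dc.example:9092");
        // batch.size = 0 disables batching entirely: each ~500 KB
        // record is sent on its own instead of waiting in a shared
        // batch buffer of the configured batch size.
        props.put("batch.size", "0");
        return props;
    }

    public static void main(String[] args) {
        System.out.println("batch.size = "
                + producerProps().getProperty("batch.size"));
    }
}
```

Note the trade-off quoted from the documentation: disabling batching avoids the fixed-size batch allocation but makes every record its own request, which can reduce throughput.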

10 votes

Just wanted to chime in because I hit the exact same errors today. I tried increasing request.timeout.ms, decreasing batch.size, and even setting batch.size to zero, but nothing worked.

It turned out the server could not connect to one of the 10 nodes in the Kafka cluster, so the exceptions being thrown were misleading. For what it's worth, we are on Kafka 0.9.0.1, if that matters.
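Since the root cause here was an unreachable broker rather than a producer setting, a quick TCP reachability check against every broker in the cluster can rule this out before touching timeouts. A minimal sketch, assuming a hypothetical broker list (not something from the answer above):

```java
import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.List;

public class BrokerReachability {

    // Returns true if a TCP connection to host:port succeeds
    // within timeoutMs; false on refusal, timeout, or DNS failure.
    public static boolean canConnect(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Hypothetical broker list; substitute your cluster's hosts.
        List<String> brokers = List.of(
                "kafka1.example:9092",
                "kafka2.example:9092");
        for (String broker : brokers) {
            String[] hostPort = broker.split(":");
            boolean ok = canConnect(hostPort[0],
                    Integer.parseInt(hostPort[1]), 3000);
            System.out.println(broker + " reachable=" + ok);
        }
    }
}
```

If any broker reports unreachable, fix the network or firewall issue first; producer-side timeout tuning cannot compensate for a node the client simply cannot reach.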