I've been working on an application based on the Java Kafka Streams API. Its goal is to process a stream of records from one Kafka topic and produce the results into another topic.
Whenever I start producing messages with the Kafka Streams application, file handles keep opening on the Kafka brokers and are never closed. Eventually the broker hits the "too many open files" limit, and the Kafka and ZooKeeper daemons crash.
I'm using the kafka-streams-1.0.1 API jar for Java, running on JDK 11. The Kafka cluster is version 1.0.0.
My application's configuration includes the following Kafka producer settings:

- batch.size: 100,000 (note: Kafka measures batch.size in bytes, not messages)
- linger.ms: 1,000 milliseconds
- buffer.memory: the byte equivalent of 5 megabytes
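For reference, a minimal sketch of how these settings could be collected into a Properties object before being handed to the Streams configuration. The class name and the exact values are illustrative; only the three config keys come from the list above:

```java
import java.util.Properties;

public class ProducerSettings {
    public static Properties build() {
        Properties props = new Properties();
        // These producer configs are forwarded to the producers that
        // Kafka Streams creates internally.
        props.put("batch.size", "100000");                            // bytes per batch
        props.put("linger.ms", "1000");                               // wait up to 1 s to fill a batch
        props.put("buffer.memory", String.valueOf(5 * 1024 * 1024));  // 5 MB send buffer
        return props;
    }

    public static void main(String[] args) {
        Properties props = build();
        System.out.println(props.getProperty("batch.size"));
        System.out.println(props.getProperty("linger.ms"));
        System.out.println(props.getProperty("buffer.memory"));
    }
}
```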
The stream processing itself is very simple and consists of:
stream.map((k,v) -> handle(k,v)).filter((k,v) -> v != null).to(outgoingTopic);
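The map-then-filter step above is logically equivalent to this plain java.util.stream sketch. Here handle is a hypothetical stand-in (the real one is application-specific) that returns null for records that should be dropped, mirroring the null filter in the topology:

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class MapFilterSketch {
    // Hypothetical stand-in for handle(k, v): drops empty values,
    // uppercases everything else.
    static String handle(String key, String value) {
        return value.isEmpty() ? null : value.toUpperCase();
    }

    // Same shape as stream.map(...).filter((k, v) -> v != null):
    // transform each record, then discard records whose value became null.
    public static List<Map.Entry<String, String>> process(List<Map.Entry<String, String>> records) {
        return records.stream()
                .map(e -> new SimpleEntry<>(e.getKey(), handle(e.getKey(), e.getValue())))
                .filter(e -> e.getValue() != null)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Map.Entry<String, String>> in = List.of(
                new SimpleEntry<>("a", "hello"),
                new SimpleEntry<>("b", ""));
        // The record whose handle() result is null is dropped.
        System.out.println(process(in));
    }
}
```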
I would appreciate any suggestions you might have.