28
votes

I have just upgraded to Kafka 1.0 and ZooKeeper 3.4.10. At first, it all started fine: a standalone producer and consumer worked as expected. After my code had run for about 10 minutes, Kafka failed with this error:

[2017-11-07 16:48:01,304] INFO Stopping serving logs in dir C:\Kafka\kafka_2.12-1.0.0\kafka-logs (kafka.log.LogManager)

[2017-11-07 16:48:01,320] FATAL Shutdown broker because all log dirs in C:\Kafka\kafka_2.12-1.0.0\kafka-logs have failed (kafka.log.LogManager)

I have reinstalled and reconfigured Kafka 1.0, and the same thing happened. If I try to restart, the same error occurs.

Deleting the log files lets Kafka start, but it fails again after a short run.

I ran version 0.10.2 for a long while and never encountered anything like this; it was very stable over long periods of time.

I have tried to find a solution and followed instructions in the documentation.

This is not yet a production environment, it is fairly simple setup, one producer, one consumer reading from one topic.

I am not sure if this could have anything to do with zookeeper.

**Update:** the issue has been posted to the Apache JIRA board. The consensus so far seems to be that it is a Windows issue.

11
Windows is not a supported platform for Kafka brokers. Similar issues are reported on Windows (link1, link2). Feel free to file a bug and provide details here - vahid
Version 0.10.2.1 worked just fine on Windows, we are still running an instance on a different server. Thank you for the link. - TeilaRei
I am facing exactly the same problem here. I am using AWS efs file system to store the kafka log files. My error log -> Caused by: java.nio.file.FileSystemException: /var/lib/kafka/data/ksql_transient_8376289768731246768_1513675960541-KSTREAM-REDUCE-STATE-STORE-0000000003-changelog-1.a9edc755278d425e9227bb03eb0cd55f-delete/.nfs937861751206a94a00000fa2: Device or resource busy - chubao
Looks like the only solution at this point when this happens is to delete all temporary files from the tmp folder. - David Corral
David, thanks for the comment. Which tmp folder do you refer to? Can you add your path? - TeilaRei

11 Answers

32
votes

Ran into this issue as well; clearing only the kafka-logs directory did not work. You'll also have to clear the ZooKeeper data.

Steps to resolve:

  1. Make sure to stop zookeeper.
  2. Take a look at your server.properties file and locate the logs directory under the following entry.

    Example:
    log.dirs=/tmp/kafka-logs/
    
  3. Delete the log directory and its contents. Kafka will recreate the directory once it's started again.

  4. Take a look at the zookeeper.properties file and locate the data directory under the following entry.

    Example:
    dataDir=/tmp/zookeeper
    
  5. Delete the data directory and its contents. Zookeeper will recreate the directory once it's started again.

  6. Start zookeeper.

    <KAFKA_HOME>/bin/zookeeper-server-start.sh -daemon <KAFKA_HOME>/config/zookeeper.properties
    
  7. Start the Kafka broker.

    <KAFKA_HOME>/bin/kafka-server-start.sh -daemon <KAFKA_HOME>/config/server.properties
    
  8. Verify the broker has started with no issues by looking at the logs/kafkaServer.out log file.
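The cleanup steps above can be sketched as a small shell script. `KAFKA_HOME` and both directory paths are assumptions here; take the real values from your own `server.properties` (`log.dirs`) and `zookeeper.properties` (`dataDir`):

```shell
#!/bin/sh
# Sketch of the cleanup steps above. The three paths are assumptions --
# substitute the values from your own server.properties and zookeeper.properties.
KAFKA_HOME="${KAFKA_HOME:-/opt/kafka}"
KAFKA_LOG_DIRS="${KAFKA_LOG_DIRS:-/tmp/kafka-logs}"
ZK_DATA_DIR="${ZK_DATA_DIR:-/tmp/zookeeper}"

# Steps 1-5: with both services stopped, wipe the log and data directories;
# Kafka and ZooKeeper recreate them on the next start.
rm -rf "$KAFKA_LOG_DIRS" "$ZK_DATA_DIR"

# Steps 6-7: restart ZooKeeper first, then the broker.
# (Commented out so the sketch can be dry-run without a Kafka install.)
# "$KAFKA_HOME"/bin/zookeeper-server-start.sh -daemon "$KAFKA_HOME"/config/zookeeper.properties
# "$KAFKA_HOME"/bin/kafka-server-start.sh -daemon "$KAFKA_HOME"/config/server.properties
```

Note that this wipes all topic data and offsets, which is fine for a dev setup like the one in the question, but not for anything holding data you care about.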

6
votes

I tried all the usual solutions:

  • Clearing the Kafka logs and ZooKeeper data (the issue reoccurred after creating a new topic)
  • Changing the log.dirs path from forward slashes "/" to backslashes "\" (like log.dirs=C:\kafka_2.12-2.1.1\data\kafka); a folder named C:\kafka_2.12-2.1.1\kafka_2.12-2.1.1datakafka was created instead, and after that the issue stopped and was resolved.

Finally I found this link, on DZone; you'll get it if you google "kafka log.dirs windows".

4
votes

Just clean the logs in C:\Kafka\kafka_2.12-1.0.0\kafka-logs and restart Kafka.

1
votes

Delete the logs in the ZooKeeper data directory.

0
votes

If you are running on a Windows machine, try changing the log.dirs parameter in server.properties (in the /config folder) to a Windows-style path (like log.dirs=C:\some_path\some_path_kafLogs).

By default, this path is written Unix-style (like /unix/path/).

This worked for me on a Windows machine.
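One Windows pitfall worth spelling out (the path below reuses the example above): server.properties is parsed as a Java properties file, where a single backslash is an escape character, so backslashes must be doubled. Forward slashes are also accepted by the JVM on Windows:

```properties
# Backslashes doubled, since "\" is an escape character in .properties files
log.dirs=C:\\some_path\\some_path_kafLogs
# Forward slashes also work on Windows
# log.dirs=C:/some_path/some_path_kafLogs
```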

0
votes

So this seems to be a Windows issue.

https://issues.apache.org/jira/browse/KAFKA-6188

The JIRA is resolved, and there is an unmerged patch attached to it.

https://github.com/apache/kafka/pull/6403

So your options are:

  • run on Windows and build Kafka yourself with the patch applied
  • run it on a Unix-style filesystem (Linux or macOS)
  • perhaps running it in Docker on Windows is worth a shot

0
votes

What worked for me was deleting both the Kafka and ZooKeeper log directories, then changing the log directory paths in the Kafka and ZooKeeper properties files (server.properties and zookeeper.properties, found under kafka/config/) from the usual forward slash '/' to a backslash '\'.

0
votes

The problem is concurrent access to Kafka's log files: one thread tries to change or delete a log segment while other Kafka threads still hold it open, which fails on Windows. Delaying file deletion gives all threads time to release their handles.
Topic configuration can help:

// Constants are statically imported from org.apache.kafka.common.config.TopicConfig
Map<String, String> config = new HashMap<>();
config.put(CLEANUP_POLICY_CONFIG, CLEANUP_POLICY_COMPACT); // cleanup.policy=compact
config.put(FILE_DELETE_DELAY_MS_CONFIG, "3600000");        // file.delete.delay.ms: 1 hour
config.put(DELETE_RETENTION_MS_CONFIG, "864000000");       // delete.retention.ms: 10 days
config.put(RETENTION_MS_CONFIG, "86400000");               // retention.ms: 1 day
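The same overrides can also be applied to an existing topic with the stock CLI. The topic name and ZooKeeper address below are placeholders; on Kafka 1.0 the tool still connects via ZooKeeper:

```shell
# Apply the same topic-level overrides from the command line.
# "my-topic" and localhost:2181 are placeholders for your topic and ZooKeeper.
bin/kafka-configs.sh --zookeeper localhost:2181 \
  --entity-type topics --entity-name my-topic --alter \
  --add-config cleanup.policy=compact,file.delete.delay.ms=3600000,delete.retention.ms=864000000,retention.ms=86400000
```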
0
votes

Simply delete all the logs from :

C:\tmp\kafka-logs

and restart ZooKeeper and the Kafka server.

0
votes

On Windows, changing the path separators to backslashes resolved the issue; each backslash required an escape ('\\'), e.g. C:\\path\\logs instead of C:\path\logs.

-4
votes

Reinstalling ZooKeeper can resolve this problem.