13
votes

I've created a Docker container with a Kafka broker and ZooKeeper that I start with a run script. On a fresh start everything comes up and runs fine (Windows -> WSL -> two tmux windows, one session). If I shut down Kafka or ZooKeeper and start it again, it connects normally.

The problem occurs when I stop the Docker container (docker stop my_kafka_container) and then start it again with my script ./run_docker. Before starting, that script deletes the old container with docker rm my_kafka_container and then runs docker run.
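The script is roughly like this (a simplified sketch; the image name and run options here are placeholders for what my actual script uses):

  #!/bin/bash
  # Remove the previous container, then start a fresh one from the same image.
  docker rm my_kafka_container
  docker run -d --name my_kafka_container my_kafka_image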

ZooKeeper starts normally, and meta.properties still holds the old cluster ID from the previous startup, but for some reason the Kafka broker cannot find that ID under the cluster/id znode and generates a new one, which differs from the one stored in meta.properties. And I get:

  ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.common.InconsistentClusterIdException: The Cluster ID m1Ze6AjGRwqarkcxJscgyQ doesn't match stored clusterId Some(1TGYcbFuRXa4Lqojs4B9Hw) in meta.properties. The broker is trying to join the wrong cluster. Configured zookeeper.connect may be wrong.
        at kafka.server.KafkaServer.startup(KafkaServer.scala:220)
        at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
        at kafka.Kafka$.main(Kafka.scala:84)
        at kafka.Kafka.main(Kafka.scala)
[2020-01-04 15:58:43,303] INFO shutting down (kafka.server.KafkaServer)

How can I prevent the broker from changing its cluster ID?

15
Are you using Kafka 2.4.0? I have the same issue, and this also seems to be related: serverfault.com/questions/997762/… - vanthome
I managed to solve this issue, do you still need an answer? - Dorian
@Dorian in any case it would be helpful to post it. Please, post it. - Bohdan Myslyvchuk

15 Answers

21
votes

If you are 100% sure you are connecting to the right ZooKeeper and the right Kafka log directories, but for some reason things don't match and you don't feel like losing all your data while trying to recover:

The Kafka data directory (check the log.dirs property in config/server.properties; it defaults to /tmp/kafka-logs) contains a file called meta.properties, which holds the cluster ID. That ID should match the one registered in ZooKeeper. Either edit the file to match ZK, edit ZK to match the file, or delete the file (it only contains the cluster ID and the broker ID; the first is the one that is currently mismatched, and the second is normally set in the config file anyway). After this minor surgery, Kafka will start with all your existing data, since you didn't delete any data files.

Like this: mv /tmp/kafka-logs/meta.properties /tmp/kafka-logs/meta.properties_old
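Alternatively, if you'd rather edit the file in place so it matches the ID that ZooKeeper reports, something along these lines works (a sketch only; the path is the default log dir and the ID is the one from the error message above, so substitute your own):

  # Back up meta.properties, then set cluster.id to the ID ZooKeeper expects.
  cp /tmp/kafka-logs/meta.properties /tmp/kafka-logs/meta.properties.bak
  sed -i 's/^cluster\.id=.*/cluster.id=m1Ze6AjGRwqarkcxJscgyQ/' /tmp/kafka-logs/meta.properties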

11
votes

I had the same issue when using Docker. It occurs since Kafka 2.4, because a check was added to verify that the cluster ID stored locally in meta.properties matches the cluster ID in ZooKeeper.

This can be fixed by making the Zookeeper data persistent and not only the Zookeeper logs. E.g. with the following config:

volumes:
  - ~/kafka/data/zookeeper_data:/var/lib/zookeeper/data
  - ~/kafka/data/zookeeper_log:/var/lib/zookeeper/log
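If you start the container with plain docker run instead of Compose (as in the question), mounting the same two host directories should have the same effect. A rough sketch, assuming the Confluent ZooKeeper image; the image name, container name and host paths are only examples:

  # Persist ZooKeeper's data and transaction log on the host so the
  # cluster id survives container removal/recreation.
  docker run -d --name my_zookeeper \
    -e ZOOKEEPER_CLIENT_PORT=2181 \
    -v ~/kafka/data/zookeeper_data:/var/lib/zookeeper/data \
    -v ~/kafka/data/zookeeper_log:/var/lib/zookeeper/log \
    confluentinc/cp-zookeeper:latest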

You should also remove the meta.properties file from the Kafka log directory once, so that Kafka retrieves the right cluster ID from ZooKeeper. After that the IDs will match and you won't have to do this again.

You may also run into a snapshot.trust.empty error which was also added in 2.4. You can solve this by either adding the snapshot.trust.empty=true setting or by making the Zookeeper data persistent before doing the upgrade to 2.4.

10
votes

I have tried most of the answers and found out the hard way (losing all my data and records) what actually works.
For the WINDOWS operating system only.
As suggested by others, we do need to change and set the default path for the data directories of both

Kafka in server.properties and
Zookeeper in zookeeper.properties

Remember, this is important: if you are on Windows, use double slashes.

For Kafka:
log.dirs=C://kafka_2.13-2.5//data//kafka

Same goes for ZooKeeper:
dataDir=C://kafka_2.13-2.5//data//zookeeper

And obviously, you need to create the folders listed above before setting anything.

Then try to run ZooKeeper and Kafka; I haven't faced the issue since changing the path.
Prior to this I had a single "/", which worked only once; then I changed it to a single backslash, which also worked, but only once.

EDIT: And don't forget to properly kill the processes with
kafka-server-stop.bat and
zookeeper-server-stop.bat

4
votes

To solve this issue:

  1. Delete all the log/data files created (or generated) by ZooKeeper and Kafka.
  2. Run ZooKeeper.
  3. Run Kafka.

This worked for me, but I am not able to explain why... Give it a try and let me know.

Hope this works for you too =)
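For a local install with the default data locations, the steps above amount to something like this (a sketch only; the paths are the defaults from zookeeper.properties and server.properties, so adjust to yours, and note that this deletes all topic data):

  # WARNING: this wipes all topics and offsets.
  rm -rf /tmp/kafka-logs /tmp/zookeeper
  # Then start ZooKeeper, then Kafka, from the Kafka installation directory.
  bin/zookeeper-server-start.sh config/zookeeper.properties &
  bin/kafka-server-start.sh config/server.properties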

4
votes

There is a cluster.id property in meta.properties; just replace its value with the ID stated in the error log.
The meta.properties file lives in Kafka's log directory, which you can find via the log.dirs property in the Kafka config file server.properties. An example below:

cat /opt/kafka/config/server.properties | grep log.dirs
Expected output:
log.dirs=/data/kafka-logs

Once you find the meta.properties file, change it. After the change it should look like this:

#
#Tue Apr 14 12:06:31 EET 2020
cluster.id=m1Ze6AjGRwqarkcxJscgyQ
version=0
broker.id=0

3
votes

Kafka was started in the past with a different ZooKeeper instance, so the old cluster ID is still registered locally. In the Kafka config directory, open the config properties file, say server.properties, and find the log path directory via the log.dirs= parameter. Then go to that log path directory and find the meta.properties file in it. Open meta.properties and update cluster.id=, or delete this file (or all the log files from the log path directory), and restart Kafka.

1
votes

This is due to a new feature introduced in the Kafka 2.4.0 release: [KAFKA-7335] - Store clusterId locally to ensure broker joins the right cluster. When the Docker container is restarted, Kafka tries to match the locally stored clusterId against ZooKeeper's clusterId (which changed because of the Docker restart); because of this mismatch, the above error is thrown. Please refer to this link for more information.

1
votes

Edit meta.properties, remove the line with cluster.id, and restart Kafka.

On Linux servers it is located at /var/lib/kafka/meta.properties.

Do this for all servers. A new cluster ID will be provided to the brokers by ZooKeeper.
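A quick way to do that on each broker (assuming the path above; adjust it if your installation keeps meta.properties elsewhere):

  # Remove only the cluster.id line; broker.id and version stay untouched.
  sed -i '/^cluster\.id=/d' /var/lib/kafka/meta.properties
  # Restart the broker afterwards (the service name may differ on your system):
  sudo systemctl restart kafka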

0
votes

Try the following...

  1. Enable the following line in ./config/server.properties:

    listeners=PLAINTEXT://:9092

  2. Modify the default ZooKeeper dataDir

  3. Modify the default Kafka log directory

0
votes

On Windows, renaming or deleting this meta.properties file helped Kafka launch, and I observed that a new file was created once it started:

{kafka-installation-folder}\softwareskafkalogs\meta.properties

0
votes

I encountered the same issue while running the Kafka server on my Windows machine.

You can try the following to resolve this issue:

  1. Open the server.properties file, which is located in your Kafka folder kafka_2.11-2.4.0\config (the folder name will vary with your Kafka version).
  2. Search for the log.dirs entry.
  3. If your log.dirs path is a Windows directory path like E:\Shyam\Software\kafka_2.11-2.4.0\kafka-logs, which uses single backslashes (\), change them to double backslashes (\\).

Hope it helps. Cheers

0
votes

Try this:

  • Open the server.properties file, which is located in your Kafka folder kafka_2.11-2.4.0\config
  • Search for the log.dirs entry
  • If you have an absolute directory specified (C:.......), change it to one relative to the current directory, for example log.dirs=../../logs

This worked for me :)

0
votes

This is how I solved it: I found the meta.properties file, renamed it, and Kafka started successfully; a new file was created.

I installed Kafka via brew on macOS.
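In case it helps to locate the file: on a Homebrew install the config normally lives under the brew prefix, so something like this can point you at the right log directory (these are typical Homebrew locations and may differ on your machine):

  # Find log.dirs in the Homebrew-installed Kafka config,
  # then rename the meta.properties inside that directory,
  # e.g. mv <log.dirs>/meta.properties <log.dirs>/meta.properties_old
  grep log.dirs "$(brew --prefix)"/etc/kafka/server.properties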

Hope this helps you.

0
votes

If, during testing, you are trying to launch an EmbeddedKafka broker and your test case doesn't clean up the temp directory, then you will have to manually delete the Kafka log directory to get past this error.

0
votes

Error -> The Cluster ID Ltm5IhhbSMypbxp3XZ_onA doesn't match stored clusterId Some(sAPfAIxcRZ2xBew78KDDTg) in meta.properties. The broker is trying to join the wrong cluster. Configured zookeeper.connect may be wrong.

Linux ->

Go to /tmp/kafka-logs and check the meta.properties file.

Use vi meta.properties and change the cluster ID to the required ID.