0 votes

For the last 10 days I have been trying to set up Kafka on two different machines:

  1. Server32
  2. Server56

Below is the list of tasks I have done so far:

  • Configured ZooKeeper and started it on both servers with:

server.1=Server32_IP:2888:3888

server.2=Server56_IP:2888:3888

  • I also changed server.properties and server-1.properties as below:

broker.id=0
port=9092
log.dir=/tmp/kafka0-logs
host.name=Server32
zookeeper.connect=Server32_IP:9092,Server56_IP:9062

and server-1.properties:

broker.id=1
port=9062
log.dir=/tmp/kafka1-logs
host.name=Server56
zookeeper.connect=Server32_IP:9092,Server56_IP:9062

I ran server.properties on Server32 and server-1.properties on Server56.
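As a side note on the configuration above: `zookeeper.connect` normally points at the ZooKeeper client port (2181 by default), not at the broker ports, so the entries would usually look like this (host placeholders taken from the question):

```properties
# Brokers connect to ZooKeeper's client port (default 2181),
# not to the other brokers' listener ports.
zookeeper.connect=Server32_IP:2181,Server56_IP:2181
```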

The problem is: when I start a producer on both servers and try to consume from either one, it works, BUT
when I stop one of the servers, the other one is no longer able to send messages.

Please help me understand the process.

Did you check the Kafka broker log to see if any errors or additional information were logged at the time? What was the replication factor setting for the topic you created? – wFateem
Check the logs of your Kafka brokers to get more information about your problem. – ImbaBalboa
@wFateem: the factor is 1. – Rahul
@Imba: There are no logs... but it is not receiving anything then. – Rahul
When you take down a broker, how is the topic described by bin/kafka-topics.sh --describe --zookeeper HOST:2181 --topic TOPIC? – ImbaBalboa

2 Answers

0 votes

Running 2 ZooKeeper nodes is not fault tolerant: if one of them is stopped, the system will not work. Unlike Kafka brokers, ZooKeeper needs a quorum (a majority) of the configured nodes in order to work. This is why ZooKeeper is typically deployed with an odd number of instances (nodes). Since 1 of 2 nodes is not a majority, it is really no better than running a single ZooKeeper node. You need at least 3 ZooKeeper nodes to tolerate a failure, because 2 of 3 is a majority, so the system stays up.
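The majority arithmetic is easy to check: an ensemble of N nodes needs N/2 + 1 of them (integer division) to stay up, so it tolerates N - (N/2 + 1) failures. A quick sketch:

```shell
# Failures tolerated by a ZooKeeper ensemble of N nodes:
# a quorum is N/2 + 1 (integer division), so N - (N/2 + 1)
# nodes can fail before the quorum is lost.
for n in 1 2 3 4 5; do
  echo "$n nodes -> $(( n - (n / 2 + 1) )) tolerated failures"
done
```

Note that 2 nodes tolerate 0 failures, the same as 1 node, while 3 nodes tolerate 1 failure, which is why even-sized ensembles buy you nothing.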

Kafka is different: you can have any number of Kafka brokers, and if they are configured correctly and you create your topics with a replication factor of 2 or greater, the Kafka cluster can keep running when you take any one of the broker nodes down, even if it's just 1 of 2.

0 votes

There's a lot of information missing here, like the Kafka version and whether you're using the new consumer APIs or the old ones. I'm assuming you're probably using a newer version of Kafka, like 0.10.x, along with the new client APIs. With the new client APIs, consumer offsets are stored on the Kafka brokers rather than in ZooKeeper as in the older versions. I think your issue here is that you created your topics with a replication factor of 1, and coincidentally the Kafka broker you shut down was hosting the only replica, so you won't be able to produce or consume messages. You can confirm the health of your topics by running the command:

kafka-topics.sh --zookeeper ZHOST:2181 --describe 

You might want to increase the replication factor to 2. That way you might be able to get away with one broker failing. Ideally you would have 3 or more Kafka broker servers with a replication factor of 2 or higher (obviously not more than the number of brokers in your cluster). Refer to the link below:

https://kafka.apache.org/documentation/#basic_ops_increase_replication_factor

"For a topic with replication factor N, we will tolerate up to N-1 server failures without losing any records committed to the log."
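The procedure behind that link boils down to writing a reassignment JSON that lists the desired replica broker ids per partition. A sketch, assuming a hypothetical topic named my-topic with a single partition, replicated onto brokers 0 and 1:

```json
{
  "version": 1,
  "partitions": [
    { "topic": "my-topic", "partition": 0, "replicas": [0, 1] }
  ]
}
```

You would then feed that file to the reassignment tool, e.g. bin/kafka-reassign-partitions.sh --zookeeper ZHOST:2181 --reassignment-json-file increase-replication.json --execute (the --zookeeper flag matches the 0.10.x-era tooling).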