2
votes

I am facing an issue with my Kafka clustering setup. I have a Kafka cluster with two brokers that are connected to two Zookeepers. I am posting data to a topic that has a replication factor of two and two partitions, using a Spring Boot Kafka producer, and consuming the same with another Spring Boot app.

I found one strange behavior when testing the cluster in the following manner -

Turned off node 1 and node 2
Turned on node 1
Turned off node 1
Turned on node 2

After turning on node 2, the Kafka cluster failed and I am not able to produce data to Kafka. My consumer started continuously throwing the message given below.

[Producer clientId=producer-1] Connection to node 1 (/server1-ip:9092) could not be established. Broker may not be available.

The issue is visible on both nodes. But if I keep both systems up for a while, the issue resolves itself and I can turn off either node without breaking the cluster.

My broker configuration is as below.


broker.id=0
listeners=PLAINTEXT://server1-ip:9092
advertised.listeners=PLAINTEXT://server1-ip:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/home/user/kafka/data/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=2
offsets.topic.replication.factor=2
transaction.state.log.replication.factor=2
transaction.state.log.min.isr=2
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=server1-ip:2181,server2-ip:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=3000
auto.leader.rebalance.enable=true
leader.imbalance.check.interval.seconds=5


Zookeeper configuration


dataDir=/home/user/kafka/data
clientPort=2181
maxClientCnxns=0
initLimit=10
syncLimit=5
tickTime=2000
server.1=server1-ip:2888:3888
server.2=server2-ip:2888:3888

Is this expected behavior of Kafka, or am I doing something wrong with this configuration?

Can somebody help me with this issue?

2
Add all the bootstrap server urls to the config: listeners=PLAINTEXT://server1-ip:9092,{server2} – Prog_G
You should never use an even number of Zookeepers – OneCricketeer
So this issue might be solved if I have an odd number of Zookeepers? – chikku
Possibly answered by answers to this question discussing Zookeeper quorum and numbers of In Sync Replicas (ISRs): stackoverflow.com/questions/58761164/… – Kevin Hooke
I am using 3 Zookeepers now. But even after that, I am able to replicate the issue. – chikku
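
For reference, a three-node ensemble (the odd count suggested in the comments) lists three server entries in every node's Zookeeper config; server3-ip below is a placeholder for the extra host, and each node still needs its own matching myid file under dataDir:

server.1=server1-ip:2888:3888
server.2=server2-ip:2888:3888
server.3=server3-ip:2888:3888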

2 Answers

1
votes

You should add all broker addresses to the bootstrap.servers property in both the producer and consumer configs. This way you can still connect to the Kafka cluster if one or more servers fail.

bootstrap.servers: A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping—this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,.... Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).
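
For example, here is a minimal plain-Java client sketch (not the asker's Spring Boot setup) with both brokers listed in bootstrap.servers; the hostnames and group id are placeholders:

import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class BootstrapServersExample {

    // Both brokers listed, so the client can bootstrap from whichever one is alive.
    private static final String BOOTSTRAP = "server1-ip:9092,server2-ip:9092";

    public static void main(String[] args) {
        Properties producerProps = new Properties();
        producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP);
        producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        Properties consumerProps = new Properties();
        consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP);
        consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");
        consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        // Either broker can serve the initial metadata request; the clients then
        // discover the full cluster membership from it.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps);
             KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            // produce / consume as usual
        }
    }
}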

-1
votes

Add all the bootstrap server IPs in your properties file. If any one of the servers is down, the Kafka consumer will try to connect to Kafka via the other bootstrap servers. Add the server 2 url in the line below:

EDIT:

spring.kafka.bootstrap-servers={SERVER1_HOST},{SERVER2_HOST}
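
With the placeholder hostnames and broker port from the question, that could look like:

spring.kafka.bootstrap-servers=server1-ip:9092,server2-ip:9092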