Has anybody worked with kafka-python in a single-node, multi-broker setup?
I was able to produce and consume data with a single-node, single-broker configuration. After switching to single-node, multi-broker, the data is produced and stored in the topic, but when I run the consumer code nothing is consumed.
Any suggestions on the above would be appreciated. Thanks in advance!
Note: all the configurations (producer, consumer, and server properties) were verified and look fine.
Producer code:
from kafka.producer import KafkaProducer
import json

def producer():
    data = {'desc': 'testing', 'data': 'testing single node multi broker'}
    topic = 'INTERNAL'
    producer = KafkaProducer(
        value_serializer=lambda v: json.dumps(v).encode('utf-8'),
        bootstrap_servers=["localhost:9092", "localhost:9093", "localhost:9094"])
    producer.send(topic, data)
    producer.flush()
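As a quick sanity check on the serializer (a standalone sketch, no broker needed): the lambda above just JSON-encodes the dict to bytes, so the bytes that reach the consumer have to be decoded back by hand.

```python
import json

# Same serializer as in the producer code above, pulled out so it can
# be exercised without a running Kafka cluster
value_serializer = lambda v: json.dumps(v).encode('utf-8')

payload = value_serializer({'desc': 'testing'})
print(payload)  # b'{"desc": "testing"}'

# Round-trip: this is what the consumer side would do with record.value
assert json.loads(payload.decode('utf-8')) == {'desc': 'testing'}
```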
Consumer code:
from kafka.consumer import KafkaConsumer

def consumer():
    topic = 'INTERNAL'
    consumer = KafkaConsumer(
        topic,
        bootstrap_servers=["localhost:9092", "localhost:9093", "localhost:9094"])
    for data in consumer:
        print(data)
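Two kafka-python settings worth trying when the consumer sits silent: by default a consumer with no committed offsets starts at the *latest* offset, so records produced before it connected are never delivered. A sketch of the adjusted settings (the actual `KafkaConsumer` call is commented out, since it needs a live broker):

```python
# Hypothetical tweak: start reading from the beginning of the topic and
# stop iterating after 10s of silence instead of blocking forever.
consumer_kwargs = dict(
    bootstrap_servers=["localhost:9092", "localhost:9093", "localhost:9094"],
    auto_offset_reset='earliest',   # read existing records, not only new ones
    consumer_timeout_ms=10000,      # raise StopIteration after 10s of no data
)
# consumer = KafkaConsumer('INTERNAL', **consumer_kwargs)  # needs a running broker
```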
Server 1 config: I have added two more server files like this, with the same parameters for the other brokers, differing only in the broker.id and log.dirs values.
broker.id=1
port=9092
num.network.threads=3
log.dirs=/tmp/kafka-logs-1
num.partitions=3
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=localhost:2181
delete.topic.enable=true
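Since all three brokers run on the same host, each server file would also need a distinct port value, not just broker.id and log.dirs; otherwise the second and third brokers cannot bind. A sketch of the lines that would differ in a hypothetical second server file (names and paths are assumptions, following the broker 1 config above):

```
# server-2.properties (only the lines that differ from broker 1)
broker.id=2
port=9093
log.dirs=/tmp/kafka-logs-2
```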
Producer config:
metadata.broker.list=localhost:9092,localhost:9093,localhost:9094
Consumer config:
zookeeper.connect=127.0.0.1:2181
zookeeper.connection.timeout.ms=6000
did you change port? – shizhz