
I am a Kafka newbie. I have order/market data arriving over multicast (Tibco Rendezvous). My Kafka producer listens to this feed and publishes everything to a single partition of one topic on a broker (I have a list of brokers and a ZooKeeper ensemble of 3 nodes, so the cluster itself tolerates ZooKeeper and broker failures).
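For context, my producer is essentially the following sketch (the broker addresses and topic name are placeholders; I'm assuming acks=all plus retries here so that an acknowledged send survives a single broker failure):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class MarketDataProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            // List of brokers; the client discovers the rest of the cluster from these.
            props.put("bootstrap.servers", "broker1:9092,broker2:9092,broker3:9092");
            // Wait for all in-sync replicas to acknowledge, so a single broker
            // failure does not lose an already-acknowledged message.
            props.put("acks", "all");
            props.put("retries", Integer.MAX_VALUE);
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Everything goes to partition 0 of the topic, preserving message order.
                producer.send(new ProducerRecord<>("market-data", 0, null, "tick payload"));
            }
        }
    }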

However, persistence on the Kafka brokers, though necessary, is not sufficient: if my producer goes down, any messages multicast in the meantime are lost for good. My consumer commits its offset after every message, because it must never process the same message twice.
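My consumer loop looks roughly like this sketch (group id, topic name, and the process() step are placeholders): auto-commit is disabled, and the offset of each record is committed as soon as that record is processed:

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    public class MarketDataConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092");
            props.put("group.id", "market-data-group");
            // Disable auto-commit so offsets only advance when we say so.
            props.put("enable.auto.commit", "false");
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("market-data"));
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        process(record.value());
                        // Commit this record's offset (+1 = next offset to read),
                        // so a restarted consumer resumes right after it.
                        consumer.commitSync(Collections.singletonMap(
                                new TopicPartition(record.topic(), record.partition()),
                                new OffsetAndMetadata(record.offset() + 1)));
                    }
                }
            }
        }

        private static void process(String value) {
            // application-specific handling of the market data message
        }
    }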

I was thinking of having a backup producer publish to a different topic, but then how would the consumer know where to resume, even given that Kafka lets me restart the consumer from a chosen offset?

Additionally, I might not have a unique identifier on the incoming messages.


1 Answer


Whatever data your producer has published is stored on the brokers (in the Kafka logs), not in the consumer. If you need a backup, a simple solution is to create the topic with a replication factor, so its partition is replicated on a second broker:

    bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 2 --partitions 1 --topic topicname
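The same topic can also be created programmatically with the Kafka AdminClient (a sketch, assuming a newer client version; the broker address and topic name are placeholders):

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.NewTopic;

    public class CreateReplicatedTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092");
            try (AdminClient admin = AdminClient.create(props)) {
                // 1 partition (as in the question), replicated on 2 brokers
                NewTopic topic = new NewTopic("topicname", 1, (short) 2);
                admin.createTopics(Collections.singletonList(topic)).all().get();
            }
        }
    }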