
I have developed a Kafka sink connector (using confluent-oss-3.2.0-2.11 and the Connect framework) for my data store (Amppol ADS), which writes data from Kafka topics into corresponding tables in my store.

Everything works as expected as long as the Kafka servers and ADS servers are up and running.

I need help/suggestions for a specific use case: events keep getting ingested into Kafka topics while the underlying sink component (ADS) is down. The expectation is that whenever the sink servers come back up, the records that were ingested into the Kafka topics in the meantime should be inserted into the tables.

Kindly advise how to handle such a case.

Is there any support available in the Connect framework for this? Or at least some references would be a great help.


1 Answer


Sink connector offsets are maintained in the __consumer_offsets topic on Kafka, under a consumer group named after your connector (connect-&lt;connector-name&gt;). When the sink connector restarts, it resumes consuming from Kafka at the last offset it committed to the __consumer_offsets topic.
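If you want to verify this, here is a minimal sketch that inspects the committed offsets for a sink connector via the Kafka AdminClient. The connector name "ads-sink" and the bootstrap address are placeholders, and note that listConsumerGroupOffsets needs a kafka-clients version of 2.0 or later (newer than the clients bundled with confluent-oss-3.2.0):

```java
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class InspectSinkOffsets {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Sink connectors consume under the group "connect-<connector-name>";
            // "ads-sink" here is a placeholder for your connector's name.
            Map<TopicPartition, OffsetAndMetadata> offsets =
                admin.listConsumerGroupOffsets("connect-ads-sink")
                     .partitionsToOffsetAndMetadata()
                     .get();
            // Print the last committed offset per topic-partition.
            offsets.forEach((tp, om) ->
                System.out.printf("%s -> offset %d%n", tp, om.offset()));
        }
    }
}
```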

So you don't have to worry about managing offsets yourself; it's all done by the workers in the Connect framework. In your scenario, you just restart your sink connector. As long as the messages are still available in Kafka (i.e., within the topic's retention period), the sink connector can be started or restarted at any time and will resume from the last committed offset.
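As a concrete illustration, here is a minimal SinkTask sketch showing the mechanism that makes this safe even while the store is down: throwing RetriableException from put() tells the Connect worker to retry the batch later without committing its offsets. AdsSinkTask, AdsUnavailableException, and writeToAds are hypothetical names standing in for your connector's code:

```java
import java.util.Collection;
import java.util.Map;
import org.apache.kafka.connect.errors.RetriableException;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

public class AdsSinkTask extends SinkTask {

    /** Hypothetical exception representing a transient ADS outage. */
    static class AdsUnavailableException extends Exception {}

    @Override
    public void start(Map<String, String> props) {
        // Open your connection to ADS here.
    }

    @Override
    public void put(Collection<SinkRecord> records) {
        try {
            writeToAds(records);
        } catch (AdsUnavailableException e) {
            // RetriableException makes the Connect worker retry this batch
            // later WITHOUT committing its offsets, so no records are
            // skipped while ADS is down.
            throw new RetriableException("ADS unavailable, will retry", e);
        }
    }

    // Hypothetical store call; replace with your actual ADS insert logic.
    private void writeToAds(Collection<SinkRecord> records)
            throws AdsUnavailableException {
        // ...
    }

    @Override
    public void stop() {
        // Close the ADS connection.
    }

    @Override
    public String version() {
        return "1.0";
    }
}
```

Once ADS comes back up, a retried put() succeeds, the framework commits the offsets, and consumption continues from where it left off.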