Consider a Spark Structured Streaming job that reads messages from Kafka.
If we have subscribed to multiple topics, how does the code manage the offset for each topic?
I have been going through the KafkaMicroBatchStream class and am not able to work out how it gets the offset for each of the different topics.
The method `def latestOffset(start: Offset, readLimit: ReadLimit): Offset` returns only a single Offset.
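From what I can tell by inspecting checkpoint files, that single Offset (a KafkaSourceOffset) appears to encode the positions of *all* subscribed topic-partitions as one JSON map, roughly like this (topic and partition names here are just illustrative):

```
{"topicA":{"0":123,"1":456},"topicB":{"0":789}}
```

So one Offset object can represent many topics at once, which is what I am trying to confirm.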
I am trying to understand the implementation because I need to write my own custom source that reads from multiple RDBMS tables, where each table has its own offset. The offsets themselves would be managed in an RDBMS table as well.
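To make the question concrete, here is a minimal sketch of what I have in mind. This is NOT Spark's actual API: the `Offset` trait below is a stand-in for `org.apache.spark.sql.connector.read.streaming.Offset` (whose only real contract is a `json()` method), and `MultiTableOffset` is a hypothetical composite offset that carries one position per table, mirroring how Kafka's offset seems to carry a map of all topic-partitions:

```scala
// Stand-in for org.apache.spark.sql.connector.read.streaming.Offset,
// whose contract is a json() method returning a string representation.
trait Offset {
  def json(): String
}

// Hypothetical composite offset: one logical Offset carrying a
// per-table position, analogous to KafkaSourceOffset carrying a
// map of per-topic-partition positions.
case class MultiTableOffset(tableOffsets: Map[String, Long]) extends Offset {
  // Serialize as {"table":offset,...}; keys sorted for determinism.
  override def json(): String =
    tableOffsets.toSeq.sortBy(_._1)
      .map { case (table, off) => s""""$table":$off""" }
      .mkString("{", ",", "}")
}

object MultiTableOffset {
  // Parse the flat JSON produced by json() above (no nesting).
  def fromJson(s: String): MultiTableOffset = {
    val pairs = s.stripPrefix("{").stripSuffix("}")
      .split(",").filter(_.nonEmpty)
      .map { kv =>
        val Array(k, v) = kv.split(":")
        k.replaceAll("\"", "") -> v.toLong
      }
    MultiTableOffset(pairs.toMap)
  }
}
```

With something like this, `latestOffset` in my custom source could return one `MultiTableOffset` covering every table, e.g. `MultiTableOffset(Map("orders" -> 42L, "customers" -> 7L))`, and `deserializeOffset` would rebuild it from the checkpointed JSON. Is that the right mental model for how the Kafka source handles multiple topics?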