
I know how to write a Kafka consumer that inserts/updates each record into an Oracle database, but I want to leverage the Kafka Connect API and the JDBC Sink Connector for this purpose. Apart from the property file, in my search I couldn't find a complete executable example with detailed steps to configure the connector and write the relevant Java code to consume a Kafka topic with JSON messages and insert/update (merge) into a table in an Oracle database using the Kafka Connect API with the JDBC Sink Connector. Can someone demonstrate an example including configuration and dependencies? Are there any disadvantages with this approach? Do we anticipate any potential issues when the table grows to millions of rows?

Thanks in advance.

You don't have to write code. You can just use the existing connector and configure it. You can find the documentation for the jdbc-sink connector here: docs.confluent.io/current/connect/kafka-connect-jdbc/… – Bartosz Wardziński

1 Answer


There won't be an example for your specific use case because the JDBC connector is meant to be generic.

Here is one configuration example with an Oracle database.

All you need is

  1. A topic of some format
  2. key.converter and value.converter to be set to deserialize that topic
  3. Your JDBC connection string and database schema (tables, projection fields, etc.)
  4. Any other JDBC-sink-specific options

All of this goes in a Java properties / JSON file, not Java source code; a minimal sketch follows below.
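
For illustration only, here is a sketch of such a JSON file (call it oracle-sink.json). The connector name, topic, connection details, and the assumption that each message value carries an id field are all placeholders you would replace with your own:

    {
      "name": "oracle-jdbc-sink",
      "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
        "tasks.max": "1",
        "topics": "my-topic",
        "connection.url": "jdbc:oracle:thin:@//localhost:1521/ORCL",
        "connection.user": "myuser",
        "connection.password": "mypassword",
        "key.converter": "org.apache.kafka.connect.storage.StringConverter",
        "value.converter": "org.apache.kafka.connect.json.JsonConverter",
        "value.converter.schemas.enable": "true",
        "insert.mode": "upsert",
        "pk.mode": "record_value",
        "pk.fields": "id",
        "auto.create": "true"
      }
    }

Two things worth noting: the JDBC sink needs to know the record schema, so with JsonConverter your JSON messages must carry the schema/payload envelope (that is what schemas.enable=true expects); plain schemaless JSON will not work without extra steps. And insert.mode=upsert is what gives you merge (insert/update) semantics on Oracle. Once the file is ready, you would submit it to a running Kafka Connect worker over its REST API, for example:

    curl -X POST -H "Content-Type: application/json" \
         --data @oracle-sink.json \
         http://localhost:8083/connectors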

If you have a specific issue creating this configuration, please comment.

Do we anticipate any potential issues when table data increases to millions?

Well, those issues would be database-server issues, not Kafka Connect issues: for example, the disk filling up, or increased load while accepting continuous writes.
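
On the Kafka Connect side, the main knobs as volume grows are parallelism and batching. A sketch in properties form, with illustrative values only:

    # more sink tasks run in parallel, bounded by the topic's partition count
    tasks.max=4
    # how many records the JDBC sink attempts to batch per write (its default is 3000)
    batch.size=3000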

Are there any disadvantages with this approach?

You'd have to handle deduplication or record expiration (e.g. GDPR) separately, if you wanted that.