
I am trying to write data from a topic (JSON data) into a MySQL database. I believe I want a JDBC Sink Connector.

How do I configure the connector to map the JSON data in the topic to the rows it inserts into the database?

The only documentation I can find is this:

"The sink connector requires knowledge of schemas, so you should use a suitable converter e.g. the Avro converter that comes with Schema Registry, or the JSON converter with schemas enabled. Kafka record keys if present can be primitive types or a Connect struct, and the record value must be a Connect struct. Fields being selected from Connect structs must be of primitive types. If the data in the topic is not of a compatible format, implementing a custom Converter may be necessary."

But how do you configure it? Are there any examples?

@SRJ, so the key of the JSON field needs to match a column name in the DB table? – Chris
Yes, as per your schema. Check here for an example: docs.confluent.io/current/connect/kafka-connect-jdbc/… – suraj_fale
@SRJ, so I assume that means you need to use Confluent Schema Registry? – Chris

1 Answer


I assume that means you need to use Confluent Schema Registry?

For "better" schema support, then yes. But no, it is not required.

You can use the JsonConverter with schemas.enable=true.
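As a rough sketch, a JDBC sink configuration along these lines would read such records and write them to MySQL (the connector name, topic, connection URL, credentials, and auto.create setting below are placeholders to adjust for your environment):

{
  "name": "mysql-sink",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "topics": "orders",
    "connection.url": "jdbc:mysql://localhost:3306/mydb",
    "connection.user": "user",
    "connection.password": "password",
    "auto.create": "true",
    "insert.mode": "insert",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "true"
  }
}

You would POST this JSON to the Connect REST API, or put the equivalent key=value pairs in a connector properties file.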

Your JSON messages will need to look like this, though:

{
   "schema" : {
      ... data that describes the payload
   }, 
   "payload": {
      ... your actual data
   }
}

For reference to this format, you can see this blog
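For instance, a record destined for a table with id, name, and amount columns could look like this (the field names and types here are purely illustrative):

{
   "schema": {
      "type": "struct",
      "name": "orders",
      "optional": false,
      "fields": [
         { "field": "id",     "type": "int32",  "optional": false },
         { "field": "name",   "type": "string", "optional": true },
         { "field": "amount", "type": "double", "optional": true }
      ]
   },
   "payload": {
      "id": 1,
      "name": "widget",
      "amount": 9.99
   }
}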

You can also use Kafka Streams or KSQL to more easily convert "schemaless" JSON into an Avro payload that carries a schema.
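A rough KSQL sketch of that approach (the stream, topic, and column names here are made up for illustration): declare the schema over the existing JSON topic once, then re-serialize it to Avro so Schema Registry and the Avro converter can take over.

CREATE STREAM orders_json (id INT, name VARCHAR, amount DOUBLE)
  WITH (KAFKA_TOPIC='orders', VALUE_FORMAT='JSON');

CREATE STREAM orders_avro
  WITH (KAFKA_TOPIC='orders_avro', VALUE_FORMAT='AVRO') AS
  SELECT * FROM orders_json;

The JDBC sink can then consume orders_avro with the Avro converter instead of the JSON envelope shown above.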