1 vote

We are trying to write the values from a topic back to a Postgres database using the Confluent JDBC Sink Connector.

connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
connection.password=xxx
tasks.max=1
topics=topic_name
auto.evolve=true
connection.user=confluent_rw
auto.create=true
connection.url=jdbc:postgresql://x.x.x.x:5432/Datawarehouse
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081

We can read the value in the console using:

kafka-avro-console-consumer --bootstrap-server localhost:9092 --topic topic_name
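To check whether the key (not just the value) is valid Avro, the console consumer can print keys as well. A command sketch, assuming the same cluster and topic as above (`print.key` and `key.separator` are standard console-consumer formatter properties); if this fails with the same SerializationException, the key is the non-Avro part:

```shell
kafka-avro-console-consumer --bootstrap-server localhost:9092 \
  --topic topic_name --from-beginning \
  --property print.key=true \
  --property key.separator=' | '
```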

The schema exists and the values are correctly deserialized by kafka-avro-console-consumer (it reports no errors), but the connector gives these errors:

{
  "name": "datawarehouse_sink",
  "connector": {
    "state": "RUNNING",
    "worker_id": "x.x.x.x:8083"
  },
  "tasks": [
    {
      "id": 0,
      "state": "FAILED",
      "worker_id": "x.x.x.x:8083",
      "trace": "org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:511)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:491)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:322)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:226)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:194)\n\tat org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)\n\tat org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat java.lang.Thread.run(Thread.java:748)\nCaused by: org.apache.kafka.connect.errors.DataException: f_machinestate_sink\n\tat io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:103)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$0(WorkerSinkTask.java:511)\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)\n\t... 
13 more\nCaused by: org.apache.kafka.common.errors.SerializationException: Error deserializing Avro message for id -1\nCaused by: org.apache.kafka.common.errors.SerializationException: Unknown magic byte!\n"
    }
  ],
  "type": "sink"
}

The final error is:

org.apache.kafka.common.errors.SerializationException: Unknown magic byte!

The schema is registered in the schema registry.

Does the problem lie in the connector's configuration file?

2 Answers

7 votes

The error org.apache.kafka.common.errors.SerializationException: Unknown magic byte! means that a message on the topic was not valid Avro and could not be deserialised. There are several possible reasons:

  1. Some messages are Avro, but others are not. If this is the case, you can use the error-handling capabilities in Kafka Connect to ignore the invalid messages with config like this:

    "errors.tolerance": "all",
    "errors.log.enable": true,
    "errors.log.include.messages": true
    
  2. The value is Avro but the key isn't. If this is the case, then use the appropriate key.converter.
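For example, if the keys were produced as plain strings rather than Avro, a sketch of the converter override for this connector's config (an assumption about your keys; StringConverter ships with Kafka Connect):

```
key.converter=org.apache.kafka.connect.storage.StringConverter
```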

More reading: https://www.confluent.io/blog/kafka-connect-deep-dive-converters-serialization-explained/

0 votes

This means that the deserializer checked the first bytes of the message and found something unexpected. The Confluent serializers prepend a 5-byte header to every message: one "magic" byte (always 0) followed by a 4-byte schema id; see the 'Wire format' section of the Schema Registry serializer documentation for details on how messages are packed. My guess is that the magic byte in your messages is not 0, i.e. they were not produced with the Confluent Avro serializer.
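That header check can be sketched in Python (a hypothetical helper written against the documented wire format, not any Confluent library API):

```python
import struct

MAGIC_BYTE = 0  # Confluent wire format: byte 0 must be 0


def check_wire_format(raw: bytes):
    """Inspect the 5-byte Confluent wire-format header of a raw Kafka
    message: 1 magic byte (0) followed by a 4-byte big-endian schema id.
    Returns the schema id, or None if the framing is not Confluent-Avro."""
    if len(raw) < 5:
        return None  # too short to carry the header
    magic, schema_id = struct.unpack(">bI", raw[:5])
    if magic != MAGIC_BYTE:
        return None  # this is what triggers "Unknown magic byte!"
    return schema_id


# A message framed by the Confluent Avro serializer (schema id 42):
print(check_wire_format(b"\x00\x00\x00\x00\x2a" + b"payload"))  # → 42
# A plain-JSON message reproduces the failure the connector reports:
print(check_wire_format(b'{"f1":"foo"}'))  # → None
```

Consuming a few raw records and running them through such a check shows immediately whether the key or the value lacks the expected framing.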