Is there a best practice for processing a Kafka stream that is serialized in Avro with a schema registry in Spark, especially with Spark Structured Streaming?
I found an example at https://github.com/ScalaConsultants/spark-kafka-avro/blob/master/src/main/scala/io/scalac/spark/AvroConsumer.scala , but I failed to load the AvroConverter class. I also cannot find an artifact named io.confluent:kafka-avro-serializer on mvnrepository.com.
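For context, here is a sketch of the build configuration I am trying to make work (the version numbers are placeholders, not values from the linked example; the second dependency is the one I cannot resolve):

```scala
// build.sbt (sketch; versions are placeholders)

// Structured Streaming Kafka source
libraryDependencies += "org.apache.spark" %% "spark-sql-kafka-0-10" % "2.4.0"

// Confluent's Avro serializer/deserializer that talks to the schema registry.
// This is the artifact I cannot find on mvnrepository.com / Maven Central.
libraryDependencies += "io.confluent" % "kafka-avro-serializer" % "5.0.0"
```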