With Apache Spark 2.1, I would like to use Kafka (0.10.0.2.5) as a source for Structured Streaming with pyspark:
kafka_app.py:
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("TestKakfa").getOrCreate()

# Subscribe to the topic "mytopic" on the local broker as a streaming source
kafka = spark.readStream.format("kafka") \
    .option("kafka.bootstrap.servers", "localhost:6667") \
    .option("subscribe", "mytopic") \
    .load()
I launched the app in the following way:
./bin/spark-submit kafka_app.py --master local[4] --jars spark-streaming-kafka-0-10-assembly_2.10-2.1.0.jar
I had first downloaded the jar from https://mvnrepository.com/artifact/org.apache.spark/spark-streaming-kafka-0-10-assembly_2.10/2.1.0
With that command, I get the following error:
[...] java.lang.ClassNotFoundException: Failed to find data source: kafka. [...]
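For what it's worth, the deployment section of the Structured Streaming + Kafka guide suggests pulling the connector with --packages rather than a manually downloaded jar. If I read it correctly (assuming the default Scala 2.11 build of Spark 2.1.0), that would be something like:

./bin/spark-submit --master local[4] --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.1.0 kafka_app.py

but I am not sure whether that is the right artifact either.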
Similarly, I cannot run the Spark example of Kafka integration: https://spark.apache.org/docs/2.1.0/structured-streaming-kafka-integration.html
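From memory, the Python snippet on that page is essentially the following (the host and topic names are the docs' placeholders):

# Read a stream from Kafka and cast the binary key/value columns to strings;
# "host1:port1,host2:port2" and "topic1" are placeholders from the docs.
df = spark \
    .readStream \
    .format("kafka") \
    .option("kafka.bootstrap.servers", "host1:port1,host2:port2") \
    .option("subscribe", "topic1") \
    .load()
df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")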
So I wonder where I went wrong, or whether Kafka integration with Spark 2.1 via pyspark is actually supported at all. This page, which lists only Scala and Java as supported languages for the 0.10 integration, makes me doubt it: https://spark.apache.org/docs/latest/streaming-kafka-integration.html (But if it is not yet supported, why was a Python example published?)
Thank you in advance for your help!