
I am trying to write data to an S3 bucket from my local computer:

from pyspark.sql import SparkSession

# S3A credentials and filesystem implementation are passed through the Hadoop configuration
spark = SparkSession.builder \
    .appName('application') \
    .config("spark.hadoop.fs.s3a.access.key", configuration.AWS_ACCESS_KEY_ID) \
    .config("spark.hadoop.fs.s3a.secret.key", configuration.AWS_ACCESS_SECRET_KEY) \
    .config("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem") \
    .getOrCreate()

# Read the Kafka topic as a streaming DataFrame, starting from the earliest offsets
lines = spark.readStream \
    .format('kafka') \
    .option('kafka.bootstrap.servers', kafka_server) \
    .option('subscribe', kafka_topic) \
    .option("startingOffsets", "earliest") \
    .load()

# Write the stream to the S3 path as Parquet files in append mode
streaming_query = lines.writeStream \
    .format('parquet') \
    .outputMode('append') \
    .option('path', configuration.S3_PATH) \
    .start()

streaming_query.awaitTermination()

Hadoop version: 3.2.1, Spark version: 3.2.1

I have added the following dependency jars to the pyspark jars directory:

spark-sql-kafka-0-10_2.12:3.2.1, aws-java-sdk-s3:1.11.375, hadoop-aws:3.2.1
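
As an alternative to copying jars, the same dependencies can be resolved at startup through spark.jars.packages; a minimal sketch, assuming the Maven coordinates below correspond to the jars listed above:

from pyspark.sql import SparkSession

# Sketch: let Spark pull the Kafka and S3A dependencies from Maven Central
# instead of placing the jars in the pyspark jars directory manually.
spark = SparkSession.builder \
    .appName('application') \
    .config("spark.jars.packages",
            "org.apache.spark:spark-sql-kafka-0-10_2.12:3.2.1,"
            "org.apache.hadoop:hadoop-aws:3.2.1,"
            "com.amazonaws:aws-java-sdk-s3:1.11.375") \
    .getOrCreate()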

I get the following error when executing:

py4j.protocol.Py4JJavaError: An error occurred while calling o68.start.
: java.io.IOException: From option fs.s3a.aws.credentials.provider java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider not found
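
For context, the fs.s3a.aws.credentials.provider option the error mentions can also be pinned explicitly to the basic access/secret key provider; a minimal sketch, assuming the same keys as above (I have not verified that this changes the outcome):

# Sketch: explicitly restrict S3A to the simple access/secret key credentials provider
spark = SparkSession.builder \
    .appName('application') \
    .config("spark.hadoop.fs.s3a.access.key", configuration.AWS_ACCESS_KEY_ID) \
    .config("spark.hadoop.fs.s3a.secret.key", configuration.AWS_ACCESS_SECRET_KEY) \
    .config("spark.hadoop.fs.s3a.aws.credentials.provider",
            "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider") \
    .getOrCreate()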