I am trying to write a Spark DataFrame to an AWS S3 bucket using PySpark, and I am getting an exception saying the encryption method specified is not supported. The bucket has server-side encryption set up.
I have the following packages configured in spark-defaults.conf: spark.jars.packages com.amazonaws:aws-java-sdk:1.9.5,org.apache.hadoop:hadoop-aws:3.2.0
I reviewed this existing thread: "Doesn't Spark/Hadoop support SSE-KMS encryption on AWS S3", which mentions that the versions above should support SSE-KMS encryption.
I also added the property 'fs.s3a.server-side-encryption-algorithm' to core-site.xml, set to 'SSE-KMS'.
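For reference, the relevant part of my core-site.xml looks roughly like this (a sketch; the second property and its key ARN are placeholders — per the Hadoop S3A docs, 'fs.s3a.server-side-encryption.key' is optional and S3A falls back to the default KMS key when it is unset):

```xml
<property>
  <name>fs.s3a.server-side-encryption-algorithm</name>
  <value>SSE-KMS</value>
</property>
<!-- optional; placeholder ARN - omit to use the account's default aws/s3 KMS key -->
<property>
  <name>fs.s3a.server-side-encryption.key</name>
  <value>arn:aws:kms:us-east-1:123456789012:key/example-key-id</value>
</property>
```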
But I still get the error. Note that writes to buckets without SSE-KMS work fine.
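In case core-site.xml is not being picked up by the driver/executors, I also tried passing the same settings through Spark's Hadoop configuration passthrough (the spark.hadoop.* prefix) in spark-defaults.conf — again a sketch, with a placeholder key ARN:

```
spark.hadoop.fs.s3a.server-side-encryption-algorithm  SSE-KMS
# optional, placeholder ARN; omit to use the bucket's default KMS key
spark.hadoop.fs.s3a.server-side-encryption.key        arn:aws:kms:us-east-1:123456789012:key/example-key-id
```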
Error message:

AmazonS3Exception: Status Code: 400, AWS Service: Amazon S3, AWS Error Code: InvalidArgument, AWS Error Message: The encryption method specified is not supported