
I am trying to download files from an S3 bucket in the Frankfurt region.

I originally encountered this problem with Spark 2.2.1 and Hadoop 2.7.5.

I got this message:

com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS Service: Amazon S3, AWS Request ID: F6EB301E99C9BC7A, AWS Error Code: null, AWS Error Message: Bad Request, S3 Extended Request ID:

Setting

       sc.hadoopConfiguration.set("fs.s3a.endpoint", "s3.eu-central-1.amazonaws.com")

didn't change a thing.

Running ./hadoop-2.7.5/bin/hadoop fs -ls s3a://frankfurt-bucket-name returns the exact same error.

This is my core-site.xml:

<configuration>
<property>
  <name>fs.s3a.endpoint</name>
  <value>s3.eu-central-1.amazonaws.com</value>
</property>
</configuration>

How can I make Hadoop use the V4 signature?
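For reference, here is a sketch of how I could pass an SDK system property to both the Hadoop CLI and Spark. The property name com.amazonaws.services.s3.enableV4 is an assumption on my part; I have not confirmed it is the right one for this SDK version:

```shell
# Assumption: the AWS SDK switches to SigV4 signing when the JVM system
# property com.amazonaws.services.s3.enableV4 is set to true.

# For the Hadoop CLI, pass it through HADOOP_OPTS:
export HADOOP_OPTS="-Dcom.amazonaws.services.s3.enableV4=true"
./hadoop-2.7.5/bin/hadoop fs -ls s3a://frankfurt-bucket-name

# For Spark, it has to reach both the driver and the executor JVMs:
spark-submit \
  --conf spark.driver.extraJavaOptions=-Dcom.amazonaws.services.s3.enableV4=true \
  --conf spark.executor.extraJavaOptions=-Dcom.amazonaws.services.s3.enableV4=true \
  my-job.jar
```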

with hadoop-3.0.1 and aws-sdk 1.11.999 it works out of the box with no need to configure the s3a endpoint - raam86
That 1.10. version of the AWS SDK needs to be told via a system property to switch to v4 auth... afraid I don't know the name, so you'll be left googling for it yourself. The good news: it can be done! - stevel
I tried setting fs.s3a.signing-algorithm to V4 with hadoop-aws 2.7.5 and AWS SDK 1.7.4 but it didn’t help 🤷🏾‍♀️ - raam86
that's the one. - stevel

1 Answer


Upgrading the Hadoop and Spark versions solved the problem.
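To make this concrete: per the comment above, Hadoop 3.0.1 with a 1.11.x AWS SDK works against eu-central-1 out of the box, since its bundled SDK signs with SigV4. A minimal sketch (the exact version pairing is an assumption; hadoop-aws must match the Hadoop version exactly):

```shell
# Sketch: repeat the listing against a Hadoop 3.x install; no extra
# endpoint or signing configuration should be needed for Frankfurt.
./hadoop-3.0.1/bin/hadoop fs -ls s3a://frankfurt-bucket-name

# For Spark, pull in the matching hadoop-aws artifact (it transitively
# brings a compatible aws-java-sdk-bundle):
spark-submit \
  --packages org.apache.hadoop:hadoop-aws:3.0.1 \
  my-job.jar
```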