
s3a issue with AWS region eu-central-1


Hi,

we've set up an on-premises Hortonworks/Ambari/HDP stack (CentOS 7) and are currently having issues accessing files on S3 in AWS region 'eu-central-1'. Buckets and EC2 instance(s) are in that region.

Every time we try to access a bucket, we get the following error message:

$ hadoop fs -ls s3a://my-bucket/
-ls: Fatal internal error
com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS Service: Amazon S3, AWS Request ID: B020E6CBC84C2E53, AWS Error Code: null, AWS Error Message: Bad Request, S3 Extended Request ID: Zj07YgLyiTmCPbmxy6QL+TcCRSNXY1aF+MJpoR0cyM7CEhPc8zpTHSja/IBMTj5STlfTM64xC+Y=
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1031)
at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:994)
at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:228)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2722)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:95)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2756)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2738)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:376)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:235)
at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:218)
at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:297)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:350)

Using the aws-cli, however, works fine (aws s3 ls s3://my-bucket/).

We've already added the configuration property for the eu-central-1 AWS endpoint to the custom core-site config:

<property>
  <name>fs.s3a.endpoint</name>
  <value>s3.eu-central-1.amazonaws.com</value>
</property>
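
As a sanity check, the endpoint can also be passed per command via the shell's generic -D option, which should rule out the property simply not being picked up:

$ hadoop fs -D fs.s3a.endpoint=s3.eu-central-1.amazonaws.com -ls s3a://my-bucket/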

Is there anything we missed? How can we fix this issue?

We're using Ambari version 2.2.1.1 / HDP 2.4

Thanks in advance, Olli

1 Reply

You're trying to talk to Frankfurt/EU central? This looks exactly like the authentication problem we cover in the troubleshooting docs there, albeit for HDP 2.5. Frankfurt is one of the newer AWS regions that only supports the V4 signing protocol; a client that signs its requests the old (V2) way gets a 400 "Bad Request" back, which is what you're seeing in that stack trace.
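
For reference, newer Hadoop versions expose the signer as a plain config option; a sketch of what the Hadoop 2.8+ docs describe (this option doesn't exist in the Hadoop 2.7 that HDP 2.4 ships, so treat it as where things are heading rather than a fix you can apply today):

<property>
  <name>fs.s3a.signing-algorithm</name>
  <value>AWS4SignerType</value>
</property>

Some AWS SDK versions can also be switched to V4 signing with the JVM system property -Dcom.amazonaws.services.s3.enableV4; whether the SDK bundled with your HDP version honours it is something you'd have to verify.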

1. Make sure that the bucket really is in that region; an endpoint that doesn't match the bucket's actual region will also fail. You can verify it with the AWS CLI (see the first sketch after this list).

2. The AWS SDK is very fussy about joda-time versions and will misbehave if there's an out-of-date joda-time on the classpath (see the second sketch below). There are some details in the Hadoop S3A documentation; bear in mind the configuration options there are ahead of anything shipping yet, though the troubleshooting is generally valid everywhere.
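
To check where a bucket actually lives, the AWS CLI (which you already have working) can ask S3 directly; with your placeholder bucket name:

$ aws s3api get-bucket-location --bucket my-bucket

It should report eu-central-1 as the LocationConstraint if the bucket really is in Frankfurt (a null value means us-east-1).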
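
And to see which joda-time jar the Hadoop client actually picks up, something like the following should work on HDP's usual layout (adjust the path to your install):

$ find /usr/hdp -name 'joda-time*.jar'

With Java 8 you want joda-time 2.8.1 or later; older versions can generate badly formatted timestamps in the signed requests, which S3 rejects with exactly this kind of 400.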