Hi,
I'm running into a problem when setting a different S3 endpoint.
The first bucket's contents are listed correctly with the command below:
hadoop fs -D fs.s3a.access.key={AccKey1} -D fs.s3a.secret.key={SecKey1} -D fs.s3a.endpoint=s3.us-west-2.amazonaws.com -ls s3a://{BucketName1}/
The second bucket, in another region ("us-east-2"), always returns an error when I use the command below:
hadoop fs -D fs.s3a.access.key={AccKey2} -D fs.s3a.secret.key={SecKey2} -D fs.s3a.endpoint=s3.us-east-2.amazonaws.com -ls s3a://{BucketName2}/
The error log is:
com.cloudera.com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS Service: Amazon S3, AWS Request ID: E4071D35B7EDCBC8, AWS Error Code: null, AWS Error Message: Bad Request, S3 Extended Request ID: hx3VeopeIlTm52qhKLJPlXKqvNJ9mspzz+MTsx5WAgjgbodKhiBPfL7wFwSdWdi4maunkaF4eDQ=
at com.cloudera.com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
at com.cloudera.com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
........................
The access key, secret key, and bucket name are not the problem, because I can access the bucket with the S3 Browser tool.
My Hadoop version is "Hadoop 2.6.0-cdh5.5.1".
Thanks