
fs.s3n.awsSecretAccessKey property not being accepted

Solved

on: Cloudera Manager 4.8.4

 

I set the fs.s3n.awsSecretAccessKey property in core-site.xml through the HDFS safety valve, but I still can't run the job. It fails with this error:

 

java.lang.IllegalArgumentException: AWS Secret Access Key must be specified as the password of a s3n URL, or by setting the fs.s3n.awsSecretAccessKey property.
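For reference, the error message names the two ways s3n will accept credentials. A sketch of both, with placeholder bucket and key values (MYACCESSKEY / MYSECRETKEY are not real credentials):

```shell
# 1) Credentials embedded in the s3n URL itself
#    (note the secret ends up in shell history and logs,
#     so this is mainly for quick tests):
hadoop fs -ls s3n://MYACCESSKEY:MYSECRETKEY@my-bucket/path/

# 2) Credentials passed as configuration properties, here per-invocation
#    via the generic -D option:
hadoop fs -D fs.s3n.awsAccessKeyId=MYACCESSKEY \
          -D fs.s3n.awsSecretAccessKey=MYSECRETKEY \
          -ls s3n://my-bucket/path/
```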

 

Meanwhile, I can run the job with the same credentials on a different cluster. Everything I've read says that once you add these credentials, deploy the config, and restart the cluster, you're good to go.

 

I've also tried adding the property to hdfs-site.xml, and I've compared the jar files on both clusters to confirm their md5sums match.

 

Has anyone run into issues with their AWS credentials?

 

 

1 ACCEPTED SOLUTION

Re: fs.s3n.awsSecretAccessKey property not being accepted

Found my answer, and wanted to share:

 

The override must go into the MapReduce Gateway client config safety valve.

 

Thanks, everyone.
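For anyone else hitting this, a sketch of what goes into the MapReduce Gateway client config safety valve for mapred-site.xml (the exact field label may vary by Cloudera Manager version; MYACCESSKEY / MYSECRETKEY are placeholders, not real credentials):

```xml
<!-- Pasted into the MapReduce client configuration safety valve
     (mapred-site.xml). Placeholder values shown. -->
<property>
  <name>fs.s3n.awsAccessKeyId</name>
  <value>MYACCESSKEY</value>
</property>
<property>
  <name>fs.s3n.awsSecretAccessKey</name>
  <value>MYSECRETKEY</value>
</property>
```

After saving, deploy the client configuration so the gateway hosts pick up the change.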

2 REPLIES 2

Re: fs.s3n.awsSecretAccessKey property not being accepted

... in the mapred-site.xml
