
fs.s3n.awsSecretAccessKey property not being accepted


on: Cloudera Manager 4.8.4

 

I set the fs.s3n.awsSecretAccessKey property in core-site.xml through the HDFS safety valve, but I still can't run the job. Running it gives me this error:

 

java.lang.IllegalArgumentException: AWS Secret Access Key must be specified as the password of a s3n URL, or by setting the fs.s3n.awsSecretAccessKey property.

 

Meanwhile, I can run the job with the same credentials on a different cluster. Everything I've read says that once you add these credentials, deploy the config, and restart the cluster, you're good to go.

 

I've also tried adding the property to hdfs-site.xml, and I've compared the job jar on both clusters to confirm the md5sums match.
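For reference, the entry in question is the standard pair of s3n credential properties as it would appear in the safety valve (the property names are the stock Hadoop ones; the values here are placeholders):

```xml
<!-- s3n credentials as a Hadoop configuration safety-valve snippet -->
<property>
  <name>fs.s3n.awsAccessKeyId</name>
  <value>YOUR_ACCESS_KEY_ID</value>
</property>
<property>
  <name>fs.s3n.awsSecretAccessKey</name>
  <value>YOUR_SECRET_ACCESS_KEY</value>
</property>
```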

 

Has anyone run into similar issues with their AWS credentials?


1 ACCEPTED SOLUTION


Found my answer, and wanted to share...

 

The override must go into the MapReduce Gateway client config safety valve.

 

Thanks, everyone.





...specifically, the property ends up in the mapred-site.xml.
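Building on this reply: one quick sanity check after redeploying the client configuration is to confirm that the generated mapred-site.xml actually defines the property. A minimal Python sketch of that check (the inline XML stands in for the real file, whose path varies by deployment):

```python
import xml.etree.ElementTree as ET

def has_property(conf_xml: str, name: str) -> bool:
    """Return True if a Hadoop-style configuration XML defines `name`."""
    root = ET.fromstring(conf_xml)
    return any(
        prop.findtext("name") == name
        for prop in root.iter("property")
    )

# Inline stand-in for a deployed client mapred-site.xml (value redacted).
sample = """<configuration>
  <property>
    <name>fs.s3n.awsSecretAccessKey</name>
    <value>REDACTED</value>
  </property>
</configuration>"""

print(has_property(sample, "fs.s3n.awsSecretAccessKey"))  # True
```

In practice you would read the deployed client config file from disk instead of the inline sample.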