When running a distcp job from HDFS to AWS S3, credentials are required to authenticate to the S3 bucket. Passing these in the S3A URI would leak the secret values into application logs. Storing them in core-site.xml is also not ideal, because any user with hdfs CLI access could then reach the S3 bucket tied to those AWS credentials.

The Hadoop Credential API can be used to manage access to S3 in a more fine-grained way.

The first step is to create a local JCEKS file in which to store the AWS Access Key and AWS Secret Key values:

hadoop credential create fs.s3a.access.key -provider localjceks://file/path/to/aws.jceks
<enter Access Key value at prompt>
hadoop credential create fs.s3a.secret.key -provider localjceks://file/path/to/aws.jceks
<enter Secret Key value at prompt>

We then copy this JCEKS file to HDFS and lock down its permissions:

hdfs dfs -put /path/to/aws.jceks /user/admin/
hdfs dfs -chown admin:admin /user/admin/aws.jceks
hdfs dfs -chmod 400 /user/admin/aws.jceks
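To confirm the aliases were stored, the provider can be queried without exposing the secret values. A quick sketch, assuming the HDFS path used above:

```shell
# List the alias names held in the JCEKS file on HDFS.
# "hadoop credential list" prints only the aliases
# (fs.s3a.access.key, fs.s3a.secret.key), never the secrets.
hadoop credential list -provider jceks://hdfs/user/admin/aws.jceks
```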

We can then reference the credential provider when calling hadoop distcp, as follows:

hadoop distcp -Dhadoop.security.credential.provider.path=jceks://hdfs/user/admin/aws.jceks /user/admin/file s3a://my-bucket/
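Rather than passing the provider path on every invocation with -Dhadoop.security.credential.provider.path, it can be set once in core-site.xml (a sketch, using the HDFS jceks path created above). Note this only tells clients where to look; the HDFS file permissions on aws.jceks still control who can actually read the keys:

```xml
<property>
  <name>hadoop.security.credential.provider.path</name>
  <value>jceks://hdfs/user/admin/aws.jceks</value>
</property>
```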

Notice that only the admin user can read this credentials file. If other users attempt to run the command above they will receive a permissions error because they can't read aws.jceks.

This also works with hdfs commands, as in the example below.

hdfs dfs -Dhadoop.security.credential.provider.path=jceks://hdfs/user/admin/aws.jceks -ls s3a://my-bucket
Expert Contributor


Very nice information. We had the same scenario: the AWS keys were exposed to the ambari user through which we run our backup (HDFS to AWS S3) with AWS credentials. We have since switched to role-based authentication, which means we don't need to use any credentials at all; we only need to set up the appropriate permissions on the AWS side. Just thought I'd share.


hadoop distcp -Dfs.s3a.server-side-encryption-algorithm=AES256 -Dfs.s3a.access.key=${AWS_ACCESS_KEY_ID} -Dfs.s3a.secret.key=${AWS_SECRET_ACCESS_KEY} -update hdfs://$dir/ s3a://${BUCKET_NAME}/CCS/$table_name/$year/$month/


hadoop distcp -Dfs.s3a.server-side-encryption-algorithm=AES256 -update hdfs://$dir/ s3a://${BUCKET_NAME}/CCVR/$table_name/$year/$month/


<property>
  <name>fs.s3a.access.key</name>
  <description>AWS access key ID. Omit for role-based authentication.</description>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <description>AWS secret key. Omit for role-based authentication.</description>
</property>

Thanks @Muthukumar S, could you please provide further details? How does role-based authentication work with an on-premises source outside of AWS?

Expert Contributor


The above applies to AWS instances, since we had been passing credentials on the command line. For an on-prem setup I would need to check. One thing I do know: when we set up on-prem servers with the AWS CLI installed, we can run aws configure once to provide the credentials, and from then on we can run aws s3 commands from the command line to access AWS S3 (provided the AWS side is set up with an IAM user, bucket policy, and so on). But with hadoop distcp, the approach you described is the solution. We could check with AWS whether role-based access is possible from on-prem.
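For the EC2 instance-profile case described here, S3A can also be told explicitly to use the instance's IAM role rather than stored keys, via the fs.s3a.aws.credentials.provider property. A sketch; the provider class comes from the AWS SDK bundled with Hadoop, so verify the class name against your Hadoop version:

```xml
<property>
  <name>fs.s3a.aws.credentials.provider</name>
  <value>com.amazonaws.auth.InstanceProfileCredentialsProvider</value>
</property>
```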

New Contributor

@slachterman We have created an instance profile for the node and have not added credentials in core-site.xml. hadoop fs -ls s3a:// works, and even selecting a few rows from the external table (whose data is in S3) works, but when I try an aggregation function such as:

select max(updated_at) from s3_table;

this query fails with the error below. Could you please help?

Caused by: java.lang.RuntimeException: Cannot find password option fs.s3a.access.key
  at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(
  at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.(
  at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat.getRecordReader(
  at org.apache.tez.mapreduce.lib.MRReaderMapred.setupOldRecordReader(
  at org.apache.tez.mapreduce.lib.MRReaderMapred.setSplit(
  at org.apache.tez.mapreduce.input.MRInput.initFromEventInternal(
  at org.apache.tez.mapreduce.input.MRInput.initFromEvent(
  at org.apache.tez.mapreduce.input.MRInputLegacy.checkAndAwaitRecordReaderInitialization(
  at org.apache.tez.mapreduce.input.MRInputLegacy.init(
  at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.getMRInput(
  at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(
  at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(
  ... 15 more
Caused by: Cannot find password option fs.s3a.access.key
  at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(
  ... 26 more
Caused by: Cannot find password option fs.s3a.access.key
  at org.apache.hadoop.fs.s3a.S3AUtils.lookupPassword(
  at org.apache.hadoop.fs.s3a.S3AUtils.getPassword(
  at org.apache.hadoop.fs.s3a.S3AUtils.getAWSAccessKeys(
  at org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProviderSet(
  at org.apache.hadoop.fs.s3a.S3ClientFactory$DefaultS3ClientFactory.createS3Client(
  at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(
  at org.apache.hadoop.fs.FileSystem.createFileSystem(
  at org.apache.hadoop.fs.FileSystem.access$200(
  at org.apache.hadoop.fs.FileSystem$Cache.getInternal(
  at org.apache.hadoop.fs.FileSystem$Cache.get(
  at org.apache.hadoop.fs.FileSystem.get(
  at org.apache.hadoop.fs.Path.getFileSystem(
  at org.apache.parquet.hadoop.ParquetFileReader.readFooter(
  at org.apache.parquet.hadoop.ParquetFileReader.readFooter(
  ... 27 more
Caused by: Configuration problem with provider path.
  at org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(
  at org.apache.hadoop.conf.Configuration.getPassword(
  at org.apache.hadoop.fs.s3a.S3AUtils.lookupPassword(
  ... 45 more
Caused by: No CredentialProviderFactory for jceks://file/usr/hdp/current/hive-server2-hive2/conf/conf.server/hive-site.jceks in
  at org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(

Hi @Manmeet Kaur, please post this on HCC as a separate question.

Version history
Last update:
09-30-2016 12:03 AM