Error while accessing S3 from Spark

I am trying to access an S3 path using Spark. I have tried providing hadoop.security.credential.provider.path both in hive-site.xml and on the command line.
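
For reference, a minimal sketch of how such a property can be passed on the command line; the jceks path, class name, and jar below are placeholders, not my actual values. Note that Hadoop properties passed via --conf need the spark.hadoop. prefix so Spark copies them into the Hadoop configuration:

  spark-submit \
    --master yarn \
    --deploy-mode cluster \
    --conf spark.hadoop.hadoop.security.credential.provider.path=jceks://hdfs/user/myuser/s3.jceks \
    --class com.example.MyApp \
    myapp.jar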

Both times, I got the error below:

ERROR ApplicationMaster: User class threw exception: java.lang.IllegalStateException: Failed to execute CommandLineRunner
java.lang.IllegalStateException: Failed to execute CommandLineRunner
    at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:779)
    at org.springframework.boot.SpringApplication.callRunners(SpringApplication.java:760)
    at org.springframework.boot.SpringApplication.afterRefresh(SpringApplication.java:747)
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:315)
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:1162)
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:1151)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:637)
Caused by: org.apache.hadoop.fs.s3a.AWSClientIOException: doesBucketExist on global-***********-app: com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain: Unable to load AWS credentials from any provider in the chain
    at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:92)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.verifyBucketExists(S3AFileSystem.java:278)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:243)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2761)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:99)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2795)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2777)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:386)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
    at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:435)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:215)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:198)

2 Replies

Hi @pratik vagyani

What's your HDP version?

Have you followed the official documentation on this topic? https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.2/bk_cloud-data-access/content/s3-get-started...
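
That "Unable to load AWS credentials from any provider in the chain" error usually means the S3A connector could not find the fs.s3a.access.key / fs.s3a.secret.key aliases in any configured credential provider. As a sketch (the jceks path below is a placeholder), you can list what your credential store actually contains, and create the aliases S3A looks for if they are missing:

  # List the aliases stored in the credential provider
  hadoop credential list -provider jceks://hdfs/user/myuser/s3.jceks

  # Create the aliases S3A looks for (each command prompts for the value)
  hadoop credential create fs.s3a.access.key -provider jceks://hdfs/user/myuser/s3.jceks
  hadoop credential create fs.s3a.secret.key -provider jceks://hdfs/user/myuser/s3.jceks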

@pratik vagyani What is your HDP version?