
Spark 2.1.0: reading *.gz files from an S3 bucket or dir as a DataFrame or Dataset

Expert Contributor

Just wondering if Spark supports reading *.gz files from an S3 bucket or dir as a DataFrame or Dataset. I think we can read them as an RDD, but it's still not working for me. Any help would be appreciated. Thank you.

I am using s3n://... but Spark throws an invalid input path exception.

val df = spark.sparkContext.textFile("s3n://..../*.gz")

doesn't work for me 😞

I'd prefer to read the S3 dir of .gz files as a DataFrame or Dataset if possible, or at least an RDD. Thank you.
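
For reference, this is roughly the end result I'm after (the bucket and prefix below are just placeholders):

// Read the gzipped text files straight into a Dataset[String] / DataFrame
// (bucket and prefix are made up for illustration)
val ds = spark.read.textFile("s3n://my-bucket/some/prefix/*.gz")
val df = ds.toDF("line")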

1 ACCEPTED SOLUTION


@BigDataRocks

I believe you need to escape the wildcard: val df = spark.sparkContext.textFile("s3n://..../\*.gz").

Additionally, the S3N filesystem client, while widely used, is no longer under active maintenance except for emergency security issues. The S3A filesystem client can read all files created by S3N, so it should be used wherever possible.

Please see: https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-a... for the s3a classpath dependencies and authentication properties you need to be aware of.
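
For example, here is a minimal sketch of the switch to s3a, assuming hadoop-aws and its AWS SDK dependency are already on the classpath as described in the link above (the bucket name and credentials are placeholders):

// Point the S3A client at your credentials (placeholders shown here;
// in practice prefer instance profiles or credential providers).
val hadoopConf = spark.sparkContext.hadoopConfiguration
hadoopConf.set("fs.s3a.access.key", "<AWS_ACCESS_KEY_ID>")
hadoopConf.set("fs.s3a.secret.key", "<AWS_SECRET_ACCESS_KEY>")

// Read the gzipped files as a Dataset[String] rather than an RDD;
// Spark decompresses .gz files transparently.
val ds = spark.read.textFile("s3a://my-bucket/some/prefix/*.gz")
val df = ds.toDF("line")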

A nice tutorial on this subject can be found here: https://community.hortonworks.com/articles/36339/spark-s3a-filesystem-client-from-hdp-to-access-s3.h...


3 REPLIES



@BigDataRocks

Please let me know if this helped answer your question.

Thanks. Tom


There's also the documentation here: https://hortonworks.github.io/hdp-aws/s3-spark/