How to Export DF data to S3 bucket
- Labels: Apache Spark
Created ‎11-14-2018 05:01 PM
Hi All,
I am trying to export the DF data to an S3 bucket, but I am not able to. I am getting the error below:
WARN FileOutputCommitter: Could not delete s3a://bucketname/Output/CheckResult/_temporary/0/_temporary/attempt_20181114215639_0002_m_000000_0
18/11/14 21:56:40 ERROR FileFormatWriter: Job job_20181114215639_0002 aborted.
I have tried the code below for testing.
res.coalesce(1).write.format("csv").save("s3a://bucketname/Output/CheckResult")
I am not sure what exactly the issue is here. I heard that Spark does not really support writes to non-distributed storage.
Kindly help me understand how to achieve this.
Many thanks.
Created ‎11-15-2018 09:21 AM
Any help on this request? Please.
Created ‎11-30-2018 04:20 PM
Sorry, missed this.
The issue here is that S3 isn't a "real" filesystem: there is no file/directory rename, so instead the committer has to list every file created and copy it over. That relies on the listings being correct, which S3, being eventually consistent, doesn't always guarantee. It looks like you've hit an inconsistency on a job commit.
To get consistent listings (HDP 3), enable S3Guard, as in the sketch below.
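For illustration, a minimal sketch of enabling S3Guard from a Spark job, assuming a DynamoDB table is used as the S3A metadata store; the app name, table name and region are placeholders, not values from your cluster:

```scala
import org.apache.spark.sql.SparkSession

// Sketch only: enable S3Guard's DynamoDB metadata store for the s3a connector.
// Table name, region and app name below are placeholders -- adjust to your environment.
val spark = SparkSession.builder()
  .appName("S3GuardExample")
  // Use DynamoDB as the consistent metadata store for S3A listings
  .config("spark.hadoop.fs.s3a.metadatastore.impl",
          "org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore")
  // DynamoDB table holding the listing metadata; create it if it does not exist
  .config("spark.hadoop.fs.s3a.s3guard.ddb.table", "my-s3guard-table")
  .config("spark.hadoop.fs.s3a.s3guard.ddb.table.create", "true")
  // Region of the DynamoDB table (usually the same region as the bucket)
  .config("spark.hadoop.fs.s3a.s3guard.ddb.region", "us-east-1")
  .getOrCreate()
```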
To avoid the slow rename process and the problems caused by inconsistency within a single query, switch to the "S3A committers" that ship with Spark on HDP 3.0. These are specifically designed to write work safely into S3; see the sketch that follows.
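A minimal sketch of switching to the S3A "directory" committer, assuming the cloud committer bindings bundled with Spark on HDP 3.0 are on the classpath; the committer choice and app name are illustrative:

```scala
import org.apache.spark.sql.SparkSession

// Sketch only: route s3a:// output through the S3A committers instead of the
// rename-based FileOutputCommitter.
val spark = SparkSession.builder()
  .appName("S3ACommitterExample")
  // Bind the committer factory for s3a:// paths to the S3A committers
  .config("spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a",
          "org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory")
  // Pick one of the committers: "directory", "partitioned" or "magic"
  .config("spark.hadoop.fs.s3a.committer.name", "directory")
  // Spark SQL needs the cloud-aware commit protocol for the committers to be picked up
  .config("spark.sql.sources.commitProtocolClass",
          "org.apache.spark.internal.io.cloud.PathOutputCommitProtocol")
  .config("spark.sql.parquet.output.committer.class",
          "org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter")
  .getOrCreate()

// With that in place the original write no longer relies on renaming a _temporary directory:
// res.coalesce(1).write.format("csv").save("s3a://bucketname/Output/CheckResult")
```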
If you can't do either of those, you cannot safely use S3 as a direct destination of work. You should write into HDFS and then copy the output to S3 afterwards, as sketched below.
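For illustration, a sketch of that two-step approach; the HDFS staging path is a placeholder and `res` is the DataFrame from the question. The copy step afterwards can be done with `hadoop distcp`:

```scala
// Sketch only: stage the output on HDFS first (placeholder path).
res.coalesce(1)
  .write
  .format("csv")
  .save("hdfs:///tmp/Output/CheckResult")

// Then copy the committed files to S3 outside of the Spark job, for example:
//   hadoop distcp hdfs:///tmp/Output/CheckResult s3a://bucketname/Output/CheckResult
```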
