How do I write Spark job output to an NFS share instead of HDFS?
Labels: Apache Spark
Explorer
Created 12-02-2019 03:21 PM
Hi All,
I need to write Spark job output files to an NFS mount point from the spark2 shell. Can you please let me know if there is a way to do this by specifying an absolute path in the spark2 shell?
Thanks,
CS
2 REPLIES
Guru
Created 12-03-2019 11:12 AM
Have you tried a path like this -> file:///path/to/mounted/nfs
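For example, from the pyspark2 shell, something like the sketch below should work. This is a minimal sketch, assuming a hypothetical mount point /mnt/nfs/output; note that in cluster mode the NFS share must be mounted at the same path on the driver and every worker node, since executors write to their local filesystem.

# assumes the NFS share is mounted at /mnt/nfs on the driver and all workers
myDataFrame.write.mode('overwrite').parquet('file:///mnt/nfs/output')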
Master Mentor
Created 01-06-2020 11:07 PM
Can you share your code example? There should be an option to specify mode='overwrite' when saving a DataFrame:
myDataFrame.write.save(path='/output/folder/path', format='parquet', mode='overwrite')
Please revert with your results.
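Combining this with the file:// scheme from the earlier reply, a sketch like the following (the path /mnt/nfs/output is hypothetical) would overwrite output directly on the NFS mount:

# write Parquet output to the NFS mount, replacing any previous run's output
myDataFrame.write.save(path='file:///mnt/nfs/output', format='parquet', mode='overwrite')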
