Spark Application on YARN
Labels: Apache Hadoop, Apache Spark, Apache YARN
Created 03-15-2017 11:31 AM
Hi All,
When executing a Spark application on a YARN cluster, can I access the local file system (the underlying OS file system), even though YARN points to HDFS?
Thanks,
Param.
Created 03-15-2017 12:56 PM
Yes, you can access local files. Here is a sample:

    spark-shell --master yarn-client

    scala> sc.textFile("file:///etc/passwd").count()
    res0: Long = 40
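One caveat: file:// paths are resolved on each executor's own local disk, so the file must exist at the same path on every node. A minimal sketch of the same read from a compiled application (the path /tmp/input.txt and the app name are hypothetical; submit with spark-submit --master yarn):

    // Sketch only: counts lines of a file that lives on the local disk of
    // every worker node (file:///tmp/input.txt is a hypothetical path).
    import org.apache.spark.{SparkConf, SparkContext}

    object LocalFileCount {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("LocalFileCount"))
        val count = sc.textFile("file:///tmp/input.txt").count()
        println(s"line count: $count")
        sc.stop()
      }
    }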
Created 03-15-2017 03:24 PM
Thanks, it worked. I had tried this already but forgot to create the file on each node; it is fine now.
One more question:
When I run a Spark application in YARN mode, I can set the memory overhead through the Spark configuration using the spark.yarn.driver.memoryOverhead property. Is something similar available for standalone and local mode? (See the sketch below.)
Thanks in advance,
Param.
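For reference, a minimal sketch of the closest equivalents in standalone and local mode, assuming Spark 1.x/2.x: there is no direct counterpart to spark.yarn.driver.memoryOverhead outside YARN, since the driver and executors are plain JVMs whose heap sizes are set directly.

    // Sketch only: memory settings that apply in standalone/local mode.
    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("MemoryConfigSketch")
      .setMaster("local[*]")               // or spark://host:7077 for standalone
      .set("spark.executor.memory", "2g")  // executor heap (standalone mode)

    // spark.driver.memory must be set before the driver JVM starts, so pass
    // it on the command line rather than in SparkConf, e.g.:
    //   spark-shell --master local[*] --driver-memory 2g
    val sc = new SparkContext(conf)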
Created 03-15-2017 05:35 PM
@Param NC, please close this thread by accepting the answer, and consider opening a new question for the follow-up.
