Created 03-15-2017 11:31 AM
Hi All,
When executing a Spark application on a YARN cluster, can I access the local file system (the underlying OS file system), even though YARN points to HDFS?
Thanks ,
Param.
Created 03-15-2017 12:56 PM
Yes, you can access the local file system. Here is a sample:
spark-shell --master yarn-client

scala> sc.textFile("file:///etc/passwd").count()
res0: Long = 40
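One caveat worth adding: file:// paths are resolved on whichever node a task runs on, so the file must exist at the same path on every node in the cluster (which is why /etc/passwd works out of the box). A small sketch of the same read with an explicit path; /tmp/data.txt here is a made-up example and would need to be copied to all nodes first:

scala> // file:// is read from the local disk of each executor's node,
scala> // so /tmp/data.txt must be present at this path on every node.
scala> sc.textFile("file:///tmp/data.txt").count()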
Created 03-15-2017 03:24 PM
Thanks, it worked. I had tried this already but forgot to create the file on each node; now it's fine.
And I have one more question: when I run a Spark app in YARN mode, I can set the memory overhead through the Spark configuration using the spark.yarn.driver.memoryOverhead property.
Is something similar available for standalone and local mode?
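For context, here is roughly what I do today on YARN, plus the plain heap settings I believe are the closest standalone/local equivalents (just a sketch of my understanding, please correct me if the overhead knob has a real counterpart in those modes):

import org.apache.spark.{SparkConf, SparkContext}

// YARN: extra off-heap headroom for the driver container, in MB.
// As far as I know, this property is only read by the YARN backend.
val yarnConf = new SparkConf()
  .set("spark.yarn.driver.memoryOverhead", "512")

// Standalone / local: I size the JVM heaps directly instead.
// Note: spark.driver.memory only takes effect if set before the driver
// JVM starts (spark-submit --driver-memory or spark-defaults.conf);
// setting it in code is too late for the driver itself. In local mode
// everything runs in the one driver JVM, so that is the knob that matters.
val localConf = new SparkConf()
  .setMaster("local[*]")
  .setAppName("memory-sketch")
  .set("spark.executor.memory", "2g")

val sc = new SparkContext(localConf)
println(sc.parallelize(1 to 100).sum())  // quick sanity check
sc.stop()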
Thanks in advance ,
Param.
Created 03-15-2017 05:35 PM
@Param NC, please close this thread by accepting the answer, and consider asking the follow-up as a new question.