When executing a Spark application on a YARN cluster, can I access the local file system (the underlying OS file system), even though YARN points to HDFS?
Yes, you can access the local file system. Here is a sample:
spark-shell --master yarn-client
res0: Long = 40
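The original post does not show the command that produced the count above; a minimal sketch of such a session, assuming a node-local file is read through the file:// scheme (the path /tmp/sample.txt is a hypothetical placeholder, not the one from the original session), might look like this:

// Inside the spark-shell started above. The file:// scheme reads from the
// node-local file system instead of HDFS; /tmp/sample.txt is a placeholder
// path, and the file must exist at the same path on every node where tasks run.
val localLines = sc.textFile("file:///tmp/sample.txt")
localLines.count()   // returns the number of lines, e.g. res0: Long = 40 above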
Thanks, it worked. I had tried this already but forgot to create the file on each node; now it's fine.
And I have one more question:
If I run a Spark application in YARN mode, I can set the memory overhead through the Spark configuration using the spark.yarn.driver.memoryOverhead property.
Is something similar available for standalone and local mode?
Thanks in advance,
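For reference, a minimal sketch of setting the property mentioned above programmatically, under the assumption that it is applied before the SparkContext is created (the app name and the 1024 MB value are illustrative only; the same property can also be passed as --conf on spark-submit):

// Setting the YARN driver memory overhead through SparkConf.
// "overhead-example" and the 1024 MB value are hypothetical.
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("overhead-example")
  .set("spark.yarn.driver.memoryOverhead", "1024")   // in MB; applies in YARN mode
val sc = new SparkContext(conf)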
@Param NC, please close this thread by accepting the answer, and consider asking this as a new question.