When executing a Spark application on a YARN cluster, can I access the local file system (the underlying OS file system), even though YARN points to HDFS?
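For context, a minimal sketch of the scenario being asked about, assuming a hypothetical file at `/tmp/data.txt` that exists on every worker node; the `file://` scheme tells Spark to read from the local OS file system rather than the default file system (HDFS when running on YARN):

```scala
import org.apache.spark.sql.SparkSession

object LocalFsRead {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("local-fs-read").getOrCreate()

    // The file:// scheme bypasses the default FS (HDFS on YARN) and reads
    // from the local file system. The path must exist on EVERY node,
    // because any executor may be scheduled to read it.
    val lines = spark.sparkContext.textFile("file:///tmp/data.txt")
    println(lines.count())

    spark.stop()
  }
}
```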
Thanks, it worked. I had tried this already but forgot to create the file on each node; now it's fine.
And I have one more question:
When I run a Spark app in YARN mode, I can set the memory overhead through the Spark configuration using the spark.yarn.driver.memoryOverhead property. Is something similar available for standalone and local mode?
Thanks in advance.
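For reference, spark.yarn.driver.memoryOverhead is YARN-specific; in standalone and local mode the closest equivalents are the general spark.driver.memory and spark.executor.memory settings, which size the JVM heaps directly. A hedged sketch of a spark-defaults.conf fragment (the values shown are illustrative):

```
# spark-defaults.conf -- applies in standalone and local mode.
# Standalone mode has no separate "overhead" knob like YARN does;
# you control the driver/executor JVM memory directly.
spark.driver.memory    4g
spark.executor.memory  2g
```

The same properties can also be set programmatically on a SparkConf, or passed on the command line via `spark-submit --driver-memory 4g --executor-memory 2g`.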