I am new to Spark, so I need some help understanding how an executor writes its output. In a MapReduce job, each mapper writes its output file, "file.out", under "hadoop/yarn/local/usercache/hdfs/appcache"; this file contains the mapper's actual output data. I need to find the equivalent output file for a Spark executor. I can see that Spark caches other temporary data in the same folder, but I am not able to locate a "file.out" file. We are persisting the data both in memory and on disk.
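For context, here is a hedged sketch of where I have been looking. My understanding (an assumption based on typical Spark-on-YARN setups) is that Spark has no single "file.out": the executor's BlockManager writes many small files under `spark.local.dir` (on YARN, the `yarn.nodemanager.local-dirs` usercache path), inside per-application `blockmgr-<uuid>` directories, with persisted partitions named `rdd_<rddId>_<partition>` and shuffle output named `shuffle_<id>_<map>_<reduce>.data`/`.index`:

```shell
# Assumption: typical YARN local-dir layout; adjust the path for your cluster.
LOCAL_DIR="${SPARK_LOCAL_DIR:-/hadoop/yarn/local/usercache/hdfs/appcache}"

# Partitions spilled to disk by persist(MEMORY_AND_DISK) show up as rdd_* block files:
find "$LOCAL_DIR" -type f -path '*blockmgr-*' -name 'rdd_*' 2>/dev/null

# Shuffle map output shows up as shuffle_*.data / shuffle_*.index files:
find "$LOCAL_DIR" -type f -path '*blockmgr-*' -name 'shuffle_*' 2>/dev/null

true  # keep the exit status clean if the directory does not exist on this machine
```

Is this the right place to look, or does the executor's actual output data live somewhere else?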