04-28-2017 10:09 PM
Can a Spark job running under YARN write a file not to HDFS (that works fine) but to a shared file system? (We use GPFS, but I doubt the particular file system matters.) So far I have not been able to make it work.
The command that fails is:
Note that /home/me is mounted on all the nodes of the Hadoop cluster.
The error that I am getting is:
Caused by: java.io.IOException: Mkdirs failed to create file:/home/me/z11/_temporary/0/_temporary/attempt_201704290002_0002_m_000000_15 (exists=false, cwd=file:/data/6/yarn/nm/usercache/ivy2/appcache/application_1490816225123_1660/container_e04_1490816225123_1660_01_000002)
The empty directory /home/me/z11/_temporary/0/ was created, but that's all.