Hi All,
I have a Cloudera 5.10 Hadoop cluster, and a separate general-purpose cluster running standalone Spark under Slurm.
How can the standalone Spark installation access HDFS on the Cloudera cluster?
My understanding is that I need to run some service on a Hadoop gateway node that exports HDFS, and then use a URL pointing to that HDFS from the standalone Spark side. How exactly is this done?
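To make the question concrete, here is the sort of thing I imagine on the Slurm/standalone side, assuming the NameNode is reachable over the network from the Slurm nodes. The hostnames, port, and paths below are placeholders, and I am not sure this is the right approach:

```shell
# Submit to the standalone Spark master (not YARN), and have the job
# read its input directly from the Cloudera cluster's HDFS via a
# fully qualified hdfs:// URL.
# "namenode" and port 8020 are placeholders for the real NameNode
# host and RPC port; "spark-master" is the standalone master host.
spark-submit \
  --master spark://spark-master:7077 \
  my_job.py "hdfs://namenode:8020/user/igor/input"
```

Is something along these lines possible, or is an additional gateway service required in between?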
Thank you,
Igor