Support Questions


How to make a standalone Spark cluster use the HDFS of CDH?

Hi All,

I have a Cloudera CDH 5.10 Hadoop cluster, and a separate general-purpose cluster running standalone Spark under Slurm.

How can the standalone Spark cluster access HDFS on the CDH cluster?

My understanding is that I need to run some service on a Hadoop gateway node that exports HDFS, and then point the standalone Spark cluster at it with some URL. How exactly is this done?

Thank you,





Hello Igor,


Thanks for your post. However, note that running a standalone Spark cluster is deprecated as of CDH 5.5.0.


There is some documentation that shows how this would work on earlier versions of CDH:
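In general terms, a standalone Spark cluster does not need any extra export service to reach HDFS: it only needs the CDH cluster's Hadoop client configuration (or fully-qualified `hdfs://` URIs naming the NameNode). A minimal sketch, assuming a hypothetical NameNode host `cdh-nn.example.com` on the default port 8020 and client configs copied from a CDH gateway host:

```shell
# Copy core-site.xml and hdfs-site.xml from a CDH gateway host
# (e.g. /etc/hadoop/conf) onto the machines running standalone Spark,
# then tell Spark where to find them:
export HADOOP_CONF_DIR=/etc/hadoop/conf

# With HADOOP_CONF_DIR set, hdfs:// paths resolve against the CDH cluster:
spark-submit --master spark://spark-master:7077 my_job.py \
    hdfs://cdh-nn.example.com:8020/user/igor/input

# Alternatively, skip HADOOP_CONF_DIR and use fully-qualified URIs in code:
#   spark.read.text("hdfs://cdh-nn.example.com:8020/user/igor/input")
```

The hostnames, ports, and paths above are placeholders; the actual NameNode address is in `fs.defaultFS` in the CDH cluster's `core-site.xml`. Also make sure the Hadoop client version on the Spark side is compatible with CDH 5.x HDFS.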



