
Write file to HDFS: limit number of datanodes to be used

Contributor

Hello,

 

When writing a file to HDFS from a Spark application in Scala, I cannot find a way to control which HDFS resources are used.

 

I know I can use a Hadoop configuration for my Hadoop FileSystem object, which is used for data manipulation such as deleting a file. Is there a way to tell it that, even if I have 3 datanodes and even if each written file would be distributed to at least 2 partitions, I would like to enforce it to be split and distributed across 3 partitions and datanodes?

 

I would like to be able to do this programmatically, and not to configure the Hadoop cluster and restart it, which would impact all Spark applications.

 

Thanks in advance for your feedback 🙂

 

1 ACCEPTED SOLUTION

Master Collaborator
Replication is an HDFS-level configuration. It isn't something you
configure from Spark, and you don't have to worry about it from Spark.
AFAIK you set a global replication factor, but it can also be set per
directory. I think you want to pursue this via HDFS.
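To illustrate the HDFS-side approach: the replication factor can be set from a Hadoop client without any cluster restart, since `dfs.replication` is honored client-side when a file is created, and `FileSystem.setReplication` changes it for an existing file. A minimal sketch, assuming a hypothetical namenode URI and file path (this requires a reachable HDFS cluster and the `hadoop-client` dependency, so treat it as an outline rather than a drop-in solution):

```scala
import java.net.URI
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object ReplicationSketch {
  def main(args: Array[String]): Unit = {
    val conf = new Configuration()
    // Client-side default replication for files created through this
    // FileSystem instance; no cluster-wide change or restart needed.
    conf.set("dfs.replication", "3")

    // hdfs://namenode:8020 is a placeholder for your actual namenode URI.
    val fs = FileSystem.get(new URI("hdfs://namenode:8020"), conf)

    // Per-file: change the replication factor of an already-written file.
    val path = new Path("/user/me/data/part-00000") // hypothetical path
    fs.setReplication(path, 3.toShort)

    fs.close()
  }
}
```

The same per-path change can be made from the command line with `hdfs dfs -setrep -w 3 /user/me/data`. Note that replication controls how many copies of each HDFS block exist (and therefore how many datanodes hold them); it is a different knob from the number of Spark partitions written.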

