Write file to HDFS: limit number of datanodes to be used
Labels: Apache Hadoop, Apache Spark, HDFS
Created on 09-19-2015 05:26 AM - edited 09-16-2022 02:41 AM
Hello,
When writing a file to HDFS from a Spark application in Scala, I cannot find a way to limit the HDFS resources that are used.
I know I can use a Hadoop configuration for my Hadoop FileSystem object, which I use for data manipulation such as deleting a file. Is there a way to tell it that, even if I have 3 datanodes and each written file would normally be distributed to at least 2 of them, I want to force it to be split and distributed across all 3 partitions and datanodes?
I would like to be able to do this programmatically, rather than reconfigure the Hadoop cluster and restart it, which would impact all Spark applications.
Thanks in advance for your feedback 🙂
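For reference, a minimal sketch of the setup described above — a Hadoop `Configuration` backing a `FileSystem` object used for operations such as deleting a file. The namenode URI and path are placeholders, not details from the post:

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

// Build a FileSystem from a Hadoop Configuration (placeholder namenode address).
val conf = new Configuration()
conf.set("fs.defaultFS", "hdfs://namenode:8020")
val fs = FileSystem.get(conf)

// The kind of data manipulation mentioned in the question:
// delete a path, with recursive = true so directories are removed as well.
fs.delete(new Path("/tmp/example-output"), true)
```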
Created 09-19-2015 06:40 AM
This isn't something you configure from Spark, and you don't have to worry about it from Spark. AFAIK you set a global replication factor, but you can set it per directory too. I think you want to pursue this via HDFS.
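A minimal sketch of that suggestion, assuming the goal is to control the HDFS replication factor from the application rather than cluster-wide (the application name and paths are placeholders):

```scala
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("replication-sketch"))

// dfs.replication set on the job's Hadoop configuration applies to files
// written through it, without reconfiguring or restarting the cluster.
sc.hadoopConfiguration.set("dfs.replication", "3")

// For data already on HDFS, the replication factor can be changed per file;
// setReplication takes a short (directories need a recursive walk, or
// `hdfs dfs -setrep -R 3 /path` from the command line).
val fs = FileSystem.get(sc.hadoopConfiguration)
fs.setReplication(new Path("/user/example/output/part-00000"), 3.toShort)
```

Note that replication controls how many datanodes hold a copy of each block, which is a different concept from Spark partitions; with a replication factor of 3 on a 3-datanode cluster, every block ends up on every datanode.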
