<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Write file to HDFS: limit number of datanodes to be used in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Write-file-to-HDFS-limit-number-of-datanodes-to-be-used/m-p/32020#M7373</link>
    <description>Replication is an HDFS-level configuration. It isn't something you&lt;BR /&gt;configure from Spark, and you don't have to worry about it from Spark.&lt;BR /&gt;AFAIK you set a global replication factor, but can set it per&lt;BR /&gt;directory too. I think you want to pursue this via HDFS.&lt;BR /&gt;&lt;BR /&gt;</description>
    <pubDate>Sat, 19 Sep 2015 13:40:55 GMT</pubDate>
    <dc:creator>srowen</dc:creator>
    <dc:date>2015-09-19T13:40:55Z</dc:date>
    <item>
      <title>Write file to HDFS: limit number of datanodes to be used</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Write-file-to-HDFS-limit-number-of-datanodes-to-be-used/m-p/32019#M7372</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;When writing a file to HDFS from a Spark application in Scala, I cannot find a way to limit the HDFS resources to be used.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I know I can pass a Hadoop configuration to my Hadoop FileSystem object, which I use for data manipulation such as deleting a file (see the sketch at the end of this post). Is there a way to tell it that, even if I have 3 datanodes and each written file only has to be distributed to at least 2 partitions, I would like to enforce it to be split and distributed across 3 partitions and datanodes?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I would like to do this programmatically, rather than reconfiguring the Hadoop cluster and restarting it, which would impact all Spark applications.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks in advance for your feedback &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;
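&lt;P&gt;For reference, here is a minimal sketch of the configured-FileSystem usage I mean, e.g. in spark-shell (the namenode URI and path are placeholders):&lt;/P&gt;&lt;PRE&gt;import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

// Build a FileSystem from an explicit Hadoop configuration.
val conf = new Configuration()
conf.set("fs.defaultFS", "hdfs://namenode:8020") // placeholder URI
val fs = FileSystem.get(conf)

// The kind of data manipulation mentioned above: a recursive delete.
fs.delete(new Path("/tmp/old-output"), true)&lt;/PRE&gt;</description>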
      <pubDate>Fri, 16 Sep 2022 09:41:23 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Write-file-to-HDFS-limit-number-of-datanodes-to-be-used/m-p/32019#M7372</guid>
      <dc:creator>Grg</dc:creator>
      <dc:date>2022-09-16T09:41:23Z</dc:date>
    </item>
    <item>
      <title>Re: Write file to HDFS: limit number of datanodes to be used</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Write-file-to-HDFS-limit-number-of-datanodes-to-be-used/m-p/32020#M7373</link>
      <description>Replication is an HDFS-level configuration. It isn't something you&lt;BR /&gt;configure from Spark, and you don't have to worry about it from Spark.&lt;BR /&gt;AFAIK you set a global replication factor, but can set it per&lt;BR /&gt;directory too. I think you want to pursue this via HDFS.&lt;BR /&gt;&lt;BR /&gt;
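A minimal sketch of doing that through the Hadoop API from Scala, e.g. in spark-shell&lt;BR /&gt;(the path and factor are placeholders; this is plain HDFS client code, nothing Spark-specific):&lt;BR /&gt;&lt;PRE&gt;import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

val conf = new Configuration()
// dfs.replication applies to files created after this point.
conf.set("dfs.replication", "3")
val fs = FileSystem.get(conf)

// For a file that already exists, change its factor directly;
// "hdfs dfs -setrep -w 3 /path" does the same from the command line.
fs.setReplication(new Path("/user/demo/output.txt"), 3.toShort)&lt;/PRE&gt;</description>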
      <pubDate>Sat, 19 Sep 2015 13:40:55 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Write-file-to-HDFS-limit-number-of-datanodes-to-be-used/m-p/32020#M7373</guid>
      <dc:creator>srowen</dc:creator>
      <dc:date>2015-09-19T13:40:55Z</dc:date>
    </item>
  </channel>
</rss>