<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: How to remove a large number of nodes from the cluster？ in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/How-to-remove-a-large-number-of-nodes-from-the-cluster/m-p/78874#M82574</link>
    <description>I think what you need first is this: &lt;A href="https://www.cloudera.com/documentation/enterprise/5-10-x/topics/cm_mc_decomm_host.html" target="_blank"&gt;https://www.cloudera.com/documentation/enterprise/5-10-x/topics/cm_mc_decomm_host.html&lt;/A&gt;. If decommissioning completes successfully (all blocks remain available across the remaining DataNodes according to the replication factor), you can then delete the nodes.</description>
    <pubDate>Thu, 23 Aug 2018 10:35:48 GMT</pubDate>
    <dc:creator>GeKas</dc:creator>
    <dc:date>2018-08-23T10:35:48Z</dc:date>
    <item>
      <title>How to remove a large number of nodes from the cluster？</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/How-to-remove-a-large-number-of-nodes-from-the-cluster/m-p/78856#M82573</link>
      <description>&lt;P&gt;As we all know, files on HDFS are replicated 3 times to prevent loss; usually two replicas are placed on one rack and the third on another rack. However, when I want to remove a large number of DataNode hosts, it is possible to remove nodes holding all 3 replicas of a block at the same time.&lt;BR /&gt;I have read the help documentation, which describes two ways to do it: &lt;A href="https://www.cloudera.com/documentation/enterprise/5-10-x/topics/cm_mc_delete_hosts.html" target="_blank"&gt;https://www.cloudera.com/documentation/enterprise/5-10-x/topics/cm_mc_delete_hosts.html&lt;/A&gt;. However, it does not mention bulk removal of DataNode hosts.&lt;BR /&gt;I also modified hdfs-site.xml and then ran "hdfs dfsadmin -refreshNodes", but it had no effect:&lt;BR /&gt;&amp;lt;property&amp;gt;&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;lt;name&amp;gt;dfs.hosts.exclude&amp;lt;/name&amp;gt;&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;lt;value&amp;gt;dfshosts.exclude&amp;lt;/value&amp;gt;&lt;BR /&gt;&amp;lt;/property&amp;gt;&lt;BR /&gt;So I would like to ask the technical experts: how can I remove DataNodes in batches through Cloudera Manager while ensuring that no data is lost? Or can I achieve this directly on the cluster with a configuration file such as hdfs-site.xml or core-site.xml?&lt;/P&gt;</description>
      <pubDate>Thu, 23 Aug 2018 06:00:10 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/How-to-remove-a-large-number-of-nodes-from-the-cluster/m-p/78856#M82573</guid>
      <dc:creator>BTibetanMastiff</dc:creator>
      <dc:date>2018-08-23T06:00:10Z</dc:date>
    </item>
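For reference, the manual exclude-file flow described in the question can be sketched as follows. The hostnames and the path /etc/hadoop/conf/dfshosts.exclude are illustrative assumptions, and dfs.hosts.exclude in hdfs-site.xml must already point at that file for the refresh to have any effect:

```shell
# Add the DataNodes to be decommissioned to the exclude file that
# dfs.hosts.exclude points to (hostnames below are placeholders).
echo "datanode07.example.com" &gt;&gt; /etc/hadoop/conf/dfshosts.exclude
echo "datanode08.example.com" &gt;&gt; /etc/hadoop/conf/dfshosts.exclude

# Tell the NameNode to re-read its include/exclude files; the listed
# nodes enter the "Decommission In Progress" state while their blocks
# are re-replicated to the remaining DataNodes.
hdfs dfsadmin -refreshNodes

# Wait until each node reports "Decommissioned" in the report; only
# then is it safe to stop the DataNode role and delete the host.
hdfs dfsadmin -report
```

One common pitfall: dfs.hosts.exclude should normally be an absolute path readable by the NameNode process, so a bare value like "dfshosts.exclude" may be why the refresh appeared to have no effect.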
    <item>
      <title>Re: How to remove a large number of nodes from the cluster？</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/How-to-remove-a-large-number-of-nodes-from-the-cluster/m-p/78874#M82574</link>
      <description>I think what you need first is this: &lt;A href="https://www.cloudera.com/documentation/enterprise/5-10-x/topics/cm_mc_decomm_host.html" target="_blank"&gt;https://www.cloudera.com/documentation/enterprise/5-10-x/topics/cm_mc_decomm_host.html&lt;/A&gt;. If decommissioning completes successfully (all blocks remain available across the remaining DataNodes according to the replication factor), you can then delete the nodes.</description>
      <pubDate>Thu, 23 Aug 2018 10:35:48 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/How-to-remove-a-large-number-of-nodes-from-the-cluster/m-p/78874#M82574</guid>
      <dc:creator>GeKas</dc:creator>
      <dc:date>2018-08-23T10:35:48Z</dc:date>
    </item>
    <item>
      <title>Re: How to remove a large number of nodes from the cluster？</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/How-to-remove-a-large-number-of-nodes-from-the-cluster/m-p/79220#M82576</link>
      <description>After my testing: you can only remove up to two nodes from the cluster at a time, otherwise the data is at risk of being lost. And if removal would leave too few replicas, the system will not complete the operation, and in the end you have to re-add the assigned role.</description>
      <pubDate>Thu, 30 Aug 2018 09:00:30 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/How-to-remove-a-large-number-of-nodes-from-the-cluster/m-p/79220#M82576</guid>
      <dc:creator>BTibetanMastiff</dc:creator>
      <dc:date>2018-08-30T09:00:30Z</dc:date>
    </item>
  </channel>
</rss>

