<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: HDFS error: could only be replicated to 0 nodes, instead of 1 in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/HDFS-error-could-only-be-replicated-to-0-nodes-instead-of-1/m-p/19582#M52772</link>
    <description>&lt;P&gt;Any suggestions to fix this issue? &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/P&gt;&lt;BLOCKQUOTE&gt;&lt;HR /&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/7712"&gt;@Trinity&lt;/a&gt; wrote:&lt;BR /&gt;&lt;P&gt;Hi Gautam,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks for your quick response; please find my answers below.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;"HDFS Under replicated blocks" implies that some blocks are not duplicated&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;enough to satisfy the default replication factor of 3. If possible consider&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;setting up clusters with at least 3 nodes.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &lt;SPAN style="color: #993300;"&gt;&lt;EM&gt;As of now our requirement does not need 3 nodes.&amp;nbsp;&lt;/EM&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;"Missing Blocks" implies the datanodes which had block before shutdown now&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;don't have it when they booted up. This could happen with the Instance&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;Store. What kind of storage did you use on the nodes? This is explained&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;here:&lt;/SPAN&gt;&lt;BR /&gt;&lt;A rel="nofollow" target="_blank" href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Storage.html"&gt;http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Storage.html&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp; &amp;nbsp;&lt;SPAN style="color: #993300;"&gt;&lt;EM&gt; &amp;nbsp; We have configured the entire environment on EBS volumes. Our working scenario is:&lt;/EM&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;&lt;SPAN style="color: #993300;"&gt;&lt;EM&gt;Cluster with 2 nodes&lt;/EM&gt;&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;SPAN style="color: #993300;"&gt;&lt;EM&gt;After making changes we need to shut down the instances (since the application is in the development stage).&lt;/EM&gt;&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;SPAN style="color: #993300;"&gt;&lt;EM&gt;When we need to perform development we start the cluster and perform the changes.&lt;/EM&gt;&lt;/SPAN&gt;&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;&lt;SPAN style="color: #993300;"&gt;&lt;EM&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; These missing blocks show up when we start the cluster after it has been shut down for 2 - 3 days.&lt;/EM&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #000000;"&gt;&lt;SPAN&gt;When you run "hadoop fsck -delete" you are telling the namenode to delete&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;files whose blocks cannot be located. This is fine for temporary files.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;Before running it however you should run "hdfs fsck&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;-list-corruptfileblocks", identify the reason why the blocks are missing.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;If the blocks are recoverable, you won't have to delete the files&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;themselves.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #000000;"&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;&lt;SPAN style="color: #993300;"&gt;&lt;EM&gt;HBase won't start without executing the&amp;nbsp;"hadoop fsck -delete" command, and the&amp;nbsp;"hdfs fsck&amp;nbsp;-list-corruptfileblocks" output shows around 105 missing blocks. The missing block navigation (path to&amp;nbsp;the block) shows the date stamp of the time of shutdown. Does that mean we are not allowed to shut down and start the cluster according to our requirement?&amp;nbsp;&lt;/EM&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #000000;"&gt;&lt;SPAN&gt;&lt;SPAN style="color: #993300;"&gt;&lt;SPAN style="color: #000000;"&gt;&lt;SPAN&gt;"could only be replicated to 0 nodes, instead of 1" could mean the&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;datanodes are not healthy. Check the datanode logs under&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;/var/log/hadoop-hdfs on both nodes to see what the problem might be.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;If it's not clear, paste the relevant parts to pastebin and &lt;/SPAN&gt;&lt;SPAN&gt;give us the URL&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #000000;"&gt;&lt;SPAN&gt;&lt;SPAN style="color: #993300;"&gt;&lt;SPAN style="color: #000000;"&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;&lt;SPAN style="color: #993300;"&gt;&lt;EM&gt;This error happens after running the "hadoop fsck -delete" command. After this command's execution HBase starts up, but HDFS shows the error&amp;nbsp;"could only be replicated to 0 nodes, instead of 1".&amp;nbsp;&lt;/EM&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;&lt;SPAN&gt;&lt;SPAN&gt;&lt;SPAN&gt;&lt;SPAN&gt;&lt;SPAN&gt;Our ultimate goal is:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;&lt;STRONG&gt;&lt;SPAN style="color: #000000;"&gt;&lt;SPAN style="line-height: 13.1999998092651px;"&gt;Create a cluster with 2 nodes&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/LI&gt;&lt;LI&gt;&lt;STRONG&gt;&lt;SPAN style="color: #000000;"&gt;&lt;SPAN style="line-height: 13.1999998092651px;"&gt;Shut down the cluster after completing my tasks&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/LI&gt;&lt;LI&gt;&lt;STRONG&gt;&lt;SPAN style="color: #000000;"&gt;&lt;SPAN style="line-height: 13.1999998092651px;"&gt;Start the cluster whenever we need to make changes or for demo purposes.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;Please let us know whether the above scenario is possible in CDH 5.X.X.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks in advance,&lt;/P&gt;&lt;P&gt;Akash.&amp;nbsp;&lt;/P&gt;&lt;HR /&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
    <pubDate>Wed, 01 Oct 2014 12:41:40 GMT</pubDate>
    <dc:creator>Trinity</dc:creator>
    <dc:date>2014-10-01T12:41:40Z</dc:date>
    <item>
      <title>HDFS error: could only be replicated to 0 nodes, instead of 1</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-error-could-only-be-replicated-to-0-nodes-instead-of-1/m-p/19136#M52768</link>
      <description>&lt;P&gt;Hi All,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have installed CDH 5 (5.1.2) on a 2-node cluster on AWS VPC with an Ubuntu base OS; after completing my work I shut the servers down.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;When I started it again, the manager showed errors starting HDFS and HBASE. The error shown for HBASE was "&lt;STRONG&gt;HDFS Under replicated blocks&lt;/STRONG&gt;". After some Googling I found that the issue is with blocks; "Missing Blocks / Corrupted Files" was the error shown there.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;&lt;SPAN style="text-decoration: underline;"&gt;Summary of&amp;nbsp;hadoop fsck /&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Total size: 311766450 B&lt;BR /&gt;Total dirs: 656&lt;BR /&gt;Total files: 215&lt;BR /&gt;Total symlinks: 0&lt;BR /&gt;Total blocks (validated): 213 (avg. block size 1463692 B)&lt;BR /&gt;********************************&lt;BR /&gt;CORRUPT FILES: 105&lt;BR /&gt;MISSING BLOCKS: 105&lt;BR /&gt;MISSING SIZE: 118118945 B&lt;BR /&gt;CORRUPT BLOCKS: 105&lt;BR /&gt;********************************&lt;BR /&gt;Minimally replicated blocks: 108 (50.704224 %)&lt;BR /&gt;Over-replicated blocks: 0 (0.0 %)&lt;BR /&gt;Under-replicated blocks: 43 (20.187794 %)&lt;BR /&gt;Mis-replicated blocks: 0 (0.0 %)&lt;BR /&gt;Default replication factor: 2&lt;BR /&gt;Average block replication: 1.0140845&lt;BR /&gt;Corrupt blocks: 105&lt;BR /&gt;Missing replicas: 43 (8.431373 %)&lt;BR /&gt;Number of data-nodes: 2&lt;BR /&gt;Number of racks: 1&lt;BR /&gt;FSCK ended at Mon Sep 22 08:06:08 UTC 2014 in 155 milliseconds&lt;/P&gt;&lt;P&gt;----------------------------------------------------------------------------------&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;The filesystem under path '/' is CORRUPT&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have followed the instructions in&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;A href="http://stackoverflow.com/questions/19205057/how-to-fix-corrupt-hadoop-hdfs" target="_blank"&gt;http://stackoverflow.com/questions/19205057/how-to-fix-corrupt-hadoop-hdfs&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&lt;A href="https://www.packtpub.com/books/content/managing-hadoop-cluster" target="_blank"&gt;https://www.packtpub.com/books/content/managing-hadoop-cluster&lt;/A&gt; (hadoop fsck -delete)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;After executing the command (hadoop fsck -delete), HBase started. While trying to start HDFS, it shows the error "&lt;STRONG&gt;HDFS error: could only be replicated to 0 nodes, instead of 1&lt;/STRONG&gt;".&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Please help me fix this.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;My concerns: Is it possible to shut down&amp;nbsp;the cluster after usage?&lt;/P&gt;&lt;P&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;If it is possible, which configurations do we need to take care of during the installation?&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 16 Sep 2022 09:08:12 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-error-could-only-be-replicated-to-0-nodes-instead-of-1/m-p/19136#M52768</guid>
      <dc:creator>Trinity</dc:creator>
      <dc:date>2022-09-16T09:08:12Z</dc:date>
    </item>
    <item>
      <title>Re: HDFS error: could only be replicated to 0 nodes, instead of 1</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-error-could-only-be-replicated-to-0-nodes-instead-of-1/m-p/19206#M52769</link>
      <description>"HDFS Under replicated blocks" implies that some blocks are not duplicated&lt;BR /&gt;enough to satisfy the default replication factor of 3. If possible consider&lt;BR /&gt;setting up clusters with at least 3 nodes.&lt;BR /&gt;&lt;BR /&gt;"Missing Blocks" implies the datanodes which had block before shutdown now&lt;BR /&gt;don't have it when they booted up. This could happen with the Instance&lt;BR /&gt;Store. What kind of storage did you use on the nodes? This is explained&lt;BR /&gt;here:&lt;BR /&gt;&lt;A href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Storage.html" target="_blank"&gt;http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Storage.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;When you run "hadoop fsck -delete" you are telling the namenode to delete&lt;BR /&gt;files whose blocks cannot be located. This is fine for temporary files.&lt;BR /&gt;Before running it however you should run "hdfs fsck&lt;BR /&gt;-list-corruptfileblocks", identify the reason why the blocks are missing.&lt;BR /&gt;If the blocks are recoverable, you won't have to delete the files&lt;BR /&gt;themselves.&lt;BR /&gt;&lt;BR /&gt;"could only be replicated to 0 nodes, instead of 1" could mean the&lt;BR /&gt;datanodes are not healthy. Check the datanode logs under&lt;BR /&gt;/var/log/hadoop-hdfs on both nodes to see what the problem might be.&lt;BR /&gt;If it's not clear, paste the relevant parts to pastebin and&lt;BR /&gt;give us the URL&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 23 Sep 2014 02:54:43 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-error-could-only-be-replicated-to-0-nodes-instead-of-1/m-p/19206#M52769</guid>
      <dc:creator>GautamG</dc:creator>
      <dc:date>2014-09-23T02:54:43Z</dc:date>
    </item>
    <item>
      <title>Re: HDFS error: could only be replicated to 0 nodes, instead of 1</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-error-could-only-be-replicated-to-0-nodes-instead-of-1/m-p/19274#M52770</link>
      <description>&lt;P&gt;Hey Gautam,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks for the quick response. Please find my responses below.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;"Missing Blocks" implies the datanodes which had block before shutdown now&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;don't have it when they booted up. This could happen with the Instance&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;Store. What kind of storage did you use on the nodes? This is explained&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;here:&lt;/SPAN&gt;&lt;BR /&gt;&lt;A rel="nofollow" target="_blank" href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Storage.html"&gt;http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Storage.html&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp; &amp;nbsp; &lt;EM&gt;&amp;nbsp;&lt;SPAN style="color: #993300;"&gt;I have configured using EBS instead of the instance store. We need to shut down the instances after usage (since the application is not yet exposed to live). My working scenario is: &lt;/SPAN&gt;&lt;/EM&gt;&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;&lt;SPAN style="color: #993300;"&gt;&lt;EM&gt;&lt;SPAN style="line-height: 14px;"&gt;Configured a cluster with 2 nodes.&amp;nbsp;&lt;/SPAN&gt;&lt;/EM&gt;&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;SPAN style="color: #993300;"&gt;&lt;EM&gt;&lt;SPAN style="line-height: 14px;"&gt;Shut them down after completing my jobs there.&lt;/SPAN&gt;&lt;/EM&gt;&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;SPAN style="color: #993300;"&gt;&lt;EM&gt;&lt;SPAN style="line-height: 14px;"&gt;Start them once I want to make some changes in the cluster.&lt;/SPAN&gt;&lt;/EM&gt;&lt;/SPAN&gt;&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;&lt;SPAN style="color: #993300;"&gt;&lt;EM&gt;In my case, the issue is that once I start the cluster (instances) 2 - 3 days after shutting it down, it shows missing block errors. Will shutting down and starting the servers according to our use cause any issue in CDH 5.x.x?&lt;/EM&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;When you run "hadoop fsck -delete" you are telling the namenode to delete&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;files whose blocks cannot be located. This is fine for temporary files.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;Before running it however you should run "hdfs fsck&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;-list-corruptfileblocks", identify the reason why the blocks are missing.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;If the blocks are recoverable, you won't have to delete the files&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;themselves.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;&lt;SPAN style="color: #993300;"&gt;&lt;EM&gt;OK, but HBASE won't come up without resolving this missing block issue. Is there any other method to fix these missing blocks?&amp;nbsp;&lt;/EM&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #000000;"&gt;&lt;SPAN&gt;"could only be replicated to 0 nodes, instead of 1" could mean the&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;datanodes are not healthy. Check the datanode logs under&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;/var/log/hadoop-hdfs on both nodes to see what the problem might be.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;If it's not clear, paste the relevant parts to pastebin and &lt;/SPAN&gt;&lt;SPAN&gt;give us the URL&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #000000;"&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &lt;SPAN style="color: #993300;"&gt;&lt;EM&gt;This happens after running the&amp;nbsp;"hadoop fsck -delete" command.&lt;/EM&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 24 Sep 2014 13:59:31 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-error-could-only-be-replicated-to-0-nodes-instead-of-1/m-p/19274#M52770</guid>
      <dc:creator>Trinity</dc:creator>
      <dc:date>2014-09-24T13:59:31Z</dc:date>
    </item>
    <item>
      <title>Re: HDFS error: could only be replicated to 0 nodes, instead of 1</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-error-could-only-be-replicated-to-0-nodes-instead-of-1/m-p/19276#M52771</link>
      <description>&lt;P&gt;Hi Gautam,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks for your quick response; please find my answers below.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;"HDFS Under replicated blocks" implies that some blocks are not duplicated&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;enough to satisfy the default replication factor of 3. If possible consider&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;setting up clusters with at least 3 nodes.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &lt;SPAN style="color: #993300;"&gt;&lt;EM&gt;As of now our requirement does not need 3 nodes.&amp;nbsp;&lt;/EM&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;"Missing Blocks" implies the datanodes which had block before shutdown now&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;don't have it when they booted up. This could happen with the Instance&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;Store. What kind of storage did you use on the nodes? This is explained&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;here:&lt;/SPAN&gt;&lt;BR /&gt;&lt;A rel="nofollow" target="_blank" href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Storage.html"&gt;http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Storage.html&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp; &amp;nbsp;&lt;SPAN style="color: #993300;"&gt;&lt;EM&gt; &amp;nbsp; We have configured the entire environment on EBS volumes. Our working scenario is:&lt;/EM&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;&lt;SPAN style="color: #993300;"&gt;&lt;EM&gt;Cluster with 2 nodes&lt;/EM&gt;&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;SPAN style="color: #993300;"&gt;&lt;EM&gt;After making changes we need to shut down the instances (since the application is in the development stage).&lt;/EM&gt;&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;SPAN style="color: #993300;"&gt;&lt;EM&gt;When we need to perform development we start the cluster and perform the changes.&lt;/EM&gt;&lt;/SPAN&gt;&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;&lt;SPAN style="color: #993300;"&gt;&lt;EM&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; These missing blocks show up when we start the cluster after it has been shut down for 2 - 3 days.&lt;/EM&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #000000;"&gt;&lt;SPAN&gt;When you run "hadoop fsck -delete" you are telling the namenode to delete&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;files whose blocks cannot be located. This is fine for temporary files.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;Before running it however you should run "hdfs fsck&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;-list-corruptfileblocks", identify the reason why the blocks are missing.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;If the blocks are recoverable, you won't have to delete the files&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;themselves.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #000000;"&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;&lt;SPAN style="color: #993300;"&gt;&lt;EM&gt;HBase won't start without executing the&amp;nbsp;"hadoop fsck -delete" command, and the&amp;nbsp;"hdfs fsck&amp;nbsp;-list-corruptfileblocks" output shows around 105 missing blocks. The missing block navigation (path to&amp;nbsp;the block) shows the date stamp of the time of shutdown. Does that mean we are not allowed to shut down and start the cluster according to our requirement?&amp;nbsp;&lt;/EM&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #000000;"&gt;&lt;SPAN&gt;&lt;SPAN style="color: #993300;"&gt;&lt;SPAN style="color: #000000;"&gt;&lt;SPAN&gt;"could only be replicated to 0 nodes, instead of 1" could mean the&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;datanodes are not healthy. Check the datanode logs under&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;/var/log/hadoop-hdfs on both nodes to see what the problem might be.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;If it's not clear, paste the relevant parts to pastebin and &lt;/SPAN&gt;&lt;SPAN&gt;give us the URL&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #000000;"&gt;&lt;SPAN&gt;&lt;SPAN style="color: #993300;"&gt;&lt;SPAN style="color: #000000;"&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;&lt;SPAN style="color: #993300;"&gt;&lt;EM&gt;This error happens after running the "hadoop fsck -delete" command. After this command's execution HBase starts up, but HDFS shows the error&amp;nbsp;"could only be replicated to 0 nodes, instead of 1".&amp;nbsp;&lt;/EM&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;&lt;SPAN&gt;&lt;SPAN&gt;&lt;SPAN&gt;&lt;SPAN&gt;&lt;SPAN&gt;Our ultimate goal is:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;&lt;STRONG&gt;&lt;SPAN style="color: #000000;"&gt;&lt;SPAN style="line-height: 13.1999998092651px;"&gt;Create a cluster with 2 nodes&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/LI&gt;&lt;LI&gt;&lt;STRONG&gt;&lt;SPAN style="color: #000000;"&gt;&lt;SPAN style="line-height: 13.1999998092651px;"&gt;Shut down the cluster after completing my tasks&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/LI&gt;&lt;LI&gt;&lt;STRONG&gt;&lt;SPAN style="color: #000000;"&gt;&lt;SPAN style="line-height: 13.1999998092651px;"&gt;Start the cluster whenever we need to make changes or for demo purposes.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;Please let us know whether the above scenario is possible in CDH 5.X.X.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks in advance,&lt;/P&gt;&lt;P&gt;Akash.&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 24 Sep 2014 14:15:47 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-error-could-only-be-replicated-to-0-nodes-instead-of-1/m-p/19276#M52771</guid>
      <dc:creator>Trinity</dc:creator>
      <dc:date>2014-09-24T14:15:47Z</dc:date>
    </item>
    <item>
      <title>Re: HDFS error: could only be replicated to 0 nodes, instead of 1</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-error-could-only-be-replicated-to-0-nodes-instead-of-1/m-p/19582#M52772</link>
      <description>&lt;P&gt;Any suggestions to fix this issue? &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/P&gt;&lt;BLOCKQUOTE&gt;&lt;HR /&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/7712"&gt;@Trinity&lt;/a&gt; wrote:&lt;BR /&gt;&lt;P&gt;Hi Gautam,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks for your quick response; please find my answers below.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;"HDFS Under replicated blocks" implies that some blocks are not duplicated&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;enough to satisfy the default replication factor of 3. If possible consider&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;setting up clusters with at least 3 nodes.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &lt;SPAN style="color: #993300;"&gt;&lt;EM&gt;As of now our requirement does not need 3 nodes.&amp;nbsp;&lt;/EM&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;"Missing Blocks" implies the datanodes which had block before shutdown now&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;don't have it when they booted up. This could happen with the Instance&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;Store. What kind of storage did you use on the nodes? This is explained&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;here:&lt;/SPAN&gt;&lt;BR /&gt;&lt;A rel="nofollow" target="_blank" href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Storage.html"&gt;http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Storage.html&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp; &amp;nbsp;&lt;SPAN style="color: #993300;"&gt;&lt;EM&gt; &amp;nbsp; We have configured the entire environment on EBS volumes. Our working scenario is:&lt;/EM&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;&lt;SPAN style="color: #993300;"&gt;&lt;EM&gt;Cluster with 2 nodes&lt;/EM&gt;&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;SPAN style="color: #993300;"&gt;&lt;EM&gt;After making changes we need to shut down the instances (since the application is in the development stage).&lt;/EM&gt;&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;SPAN style="color: #993300;"&gt;&lt;EM&gt;When we need to perform development we start the cluster and perform the changes.&lt;/EM&gt;&lt;/SPAN&gt;&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;&lt;SPAN style="color: #993300;"&gt;&lt;EM&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; These missing blocks show up when we start the cluster after it has been shut down for 2 - 3 days.&lt;/EM&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #000000;"&gt;&lt;SPAN&gt;When you run "hadoop fsck -delete" you are telling the namenode to delete&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;files whose blocks cannot be located. This is fine for temporary files.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;Before running it however you should run "hdfs fsck&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;-list-corruptfileblocks", identify the reason why the blocks are missing.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;If the blocks are recoverable, you won't have to delete the files&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;themselves.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #000000;"&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;&lt;SPAN style="color: #993300;"&gt;&lt;EM&gt;HBase won't start without executing the&amp;nbsp;"hadoop fsck -delete" command, and the&amp;nbsp;"hdfs fsck&amp;nbsp;-list-corruptfileblocks" output shows around 105 missing blocks. The missing block navigation (path to&amp;nbsp;the block) shows the date stamp of the time of shutdown. Does that mean we are not allowed to shut down and start the cluster according to our requirement?&amp;nbsp;&lt;/EM&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #000000;"&gt;&lt;SPAN&gt;&lt;SPAN style="color: #993300;"&gt;&lt;SPAN style="color: #000000;"&gt;&lt;SPAN&gt;"could only be replicated to 0 nodes, instead of 1" could mean the&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;datanodes are not healthy. Check the datanode logs under&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;/var/log/hadoop-hdfs on both nodes to see what the problem might be.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;If it's not clear, paste the relevant parts to pastebin and &lt;/SPAN&gt;&lt;SPAN&gt;give us the URL&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #000000;"&gt;&lt;SPAN&gt;&lt;SPAN style="color: #993300;"&gt;&lt;SPAN style="color: #000000;"&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;&lt;SPAN style="color: #993300;"&gt;&lt;EM&gt;This error happens after running the "hadoop fsck -delete" command. After this command's execution HBase starts up, but HDFS shows the error&amp;nbsp;"could only be replicated to 0 nodes, instead of 1".&amp;nbsp;&lt;/EM&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;&lt;SPAN&gt;&lt;SPAN&gt;&lt;SPAN&gt;&lt;SPAN&gt;&lt;SPAN&gt;Our ultimate goal is:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;&lt;STRONG&gt;&lt;SPAN style="color: #000000;"&gt;&lt;SPAN style="line-height: 13.1999998092651px;"&gt;Create a cluster with 2 nodes&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/LI&gt;&lt;LI&gt;&lt;STRONG&gt;&lt;SPAN style="color: #000000;"&gt;&lt;SPAN style="line-height: 13.1999998092651px;"&gt;Shut down the cluster after completing my tasks&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/LI&gt;&lt;LI&gt;&lt;STRONG&gt;&lt;SPAN style="color: #000000;"&gt;&lt;SPAN style="line-height: 13.1999998092651px;"&gt;Start the cluster whenever we need to make changes or for demo purposes.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;Please let us know whether the above scenario is possible in CDH 5.X.X.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks in advance,&lt;/P&gt;&lt;P&gt;Akash.&amp;nbsp;&lt;/P&gt;&lt;HR /&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 01 Oct 2014 12:41:40 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-error-could-only-be-replicated-to-0-nodes-instead-of-1/m-p/19582#M52772</guid>
      <dc:creator>Trinity</dc:creator>
      <dc:date>2014-10-01T12:41:40Z</dc:date>
    </item>
    <item>
      <title>Re: HDFS error: could only be replicated to 0 nodes, instead of 1</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-error-could-only-be-replicated-to-0-nodes-instead-of-1/m-p/19866#M52773</link>
      <description>&lt;P&gt;Hi Team,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I got a solution.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;When we select an instance with an instance store for configuring CDH, the log files are automatically stored on the instance store. When we stop the instance, the data / logs in the instance store are deleted, and that results in the "Missing Blocks" error.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;To avoid this we need to remove the instance store while launching the instance, or change the log location to an EBS volume manually after completing the installation. I think it's better to remove the instance store while launching the instance.&lt;/P&gt;&lt;P&gt;Thanks to you all.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Cheers!!!!&lt;/P&gt;</description>
      <pubDate>Wed, 08 Oct 2014 18:49:48 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-error-could-only-be-replicated-to-0-nodes-instead-of-1/m-p/19866#M52773</guid>
      <dc:creator>Trinity</dc:creator>
      <dc:date>2014-10-08T18:49:48Z</dc:date>
    </item>
  </channel>
</rss>

