Percentage under replicated blocks: 100.00%

Explorer

Brand new to Cloudera. I have used Cloudera Manager to install on a single node for trial purposes. After the install there is a health warning on HDFS:
Under-Replicated Blocks:
 

283 under replicated blocks in the cluster. 283 total blocks in the cluster. Percentage under replicated blocks: 100.00%. Critical threshold: 40.00%.

 

Not sure what could cause this. I have Googled around and not really found anything.

1 ACCEPTED SOLUTION

Your default replication factor is probably 3. Since you only have one node, that's impossible to satisfy.

On single-node clusters, you should reduce the replication factor to 1 in HDFS configuration in Cloudera Manager. Note that this only affects newly created files. Your existing files will still try to replicate to 3 hosts unless you change them explicitly (you should be able to google how to change a file's replication factor).
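
For existing data, something along these lines should do it (the path below is just an example, and -w waits until replication completes):

# set replication to 1 for a single existing file (example path)
$ hdfs dfs -setrep -w 1 /user/example/file.txt
# or apply it recursively to everything already in HDFS
$ hdfs dfs -setrep -R -w 1 /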

Thanks,
Darren


7 REPLIES

New Contributor

I had a 3-node cluster in which node#1 had both a DataNode (DN) and the NameNode (NN), while node#2 and node#3 had only DataNodes.

Now node#2 and node#3 were down, and I had free RAM and free disk space available on node#1.

 

Ideally, Hadoop should re-replicate the blocks from the dead nodes (node#2 and node#3) onto node#1 itself to maintain the replication factor of 3, but it is not happening. I waited for hours without seeing any progress, and Cloudera Manager keeps showing the same health issues.


Note: The same behaviour is also observed when only node#3 is down and node#1 and node#2 are up and running.

 

Can someone please explain why this is happening?
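
One generic way to double-check which DataNodes the NameNode actually considers live (not specific to this cluster) is:

# lists live and dead DataNodes and their capacity, as seen by the NameNode
$ hdfs dfsadmin -report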

Expert Contributor

Hi @Manindar,

 

Replication only happens across different DataNodes; Hadoop does not place multiple replicas of a block on the same host.

 

If you only have one node up, you will only have one copy of the data.

 

Can you post your NameNode log? Thanks.


Regards,

Manu.

Master Guru

@Manindar,

 

Indeed, if you have a replication factor of 3 and only one DataNode is alive, then there is nowhere else to replicate to. With 3 nodes and a replication factor of 3, the blocks already have a replica on that remaining node, so there is nothing to replicate or move.
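
As a quick sanity check (a generic command, nothing cluster-specific assumed), the block health the NameNode sees can be summarized with:

# prints a summary of blocks, including the under-replicated count
$ hdfs fsck /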


Rising Star

Please use the command below to change the replication factor for existing data on the Hadoop filesystem:


$ hadoop fs -setrep -R -w 2 /

 

Note: You can replace the number with whatever replication factor you require.
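
To confirm the new factor took effect, one option (the path here is only an example) is:

# the second column of the listing is each file's replication factor
$ hadoop fs -ls /user/example
# or print the replication factor of a single file directly
$ hdfs dfs -stat %r /user/example/file.txt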

 

$ hdfs fsck / -delete

 

New Contributor
I faced the same issue: even though I changed the replication factor, the error still showed up.

After running hadoop fs -setrep -R -w 2, the error was fixed.

Can you please let us know the purpose of the hdfs fsck / -delete command? Will it delete under-replicated or corrupted blocks?

Explorer

Hi!

 

Yes - from the documentation (https://hadoop.apache.org/docs/r1.2.1/commands_manual.html#fsck):

fsck

Runs a HDFS filesystem checking utility.

 

COMMAND_OPTION    Description
-delete           Delete corrupted files.
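
If you are unsure what -delete would remove, a safer first step (generic example, adjust the path as needed) is to list the affected files before deleting anything; note that the option targets corrupted files, not merely under-replicated ones:

# list files that have corrupt blocks, without changing anything
$ hdfs fsck / -list-corruptfileblocks
# show per-file block details for a specific path
$ hdfs fsck /path/in/hdfs -files -blocks -locations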