Re: HDFS - Under-Replicated Blocks, Corrupt Blocks

The error message says some Accumulo files in the Trash folder have only 3 replicas when there should be 5. The default value of dfs.replication is 3, and dfs.replication.max defaults to 512; the latter is the maximum number of replicas allowed for a block. Accumulo checks whether dfs.replication.max is set and, if not, uses 5 as the replication factor for its metadata files. What version of CDH are you running?
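To confirm which values are actually in effect on your cluster, you can query the client configuration. A minimal sketch; the Trash path below is illustrative and will differ on your system:

```shell
# Default replication factor the client requests for new files (typically 3)
hdfs getconf -confKey dfs.replication

# Cluster-wide upper bound on replicas per block (defaults to 512)
hdfs getconf -confKey dfs.replication.max

# List the affected files to see their current replication factor
# (second column of the listing); path is an example only
hadoop fs -ls /user/accumulo/.Trash
```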


All of this is detailed in the bug report ACCUMULO-683:

https://issues.apache.org/jira/browse/ACCUMULO-683


So you can do the following:

- Set dfs.replication.max to 3.

- Set table.file.replication for the !METADATA table to 3 as well.

- Use "hadoop fs -setrep" to change the replication factor of those files to 3:

http://hadoop.apache.org/docs/r0.18.3/hdfs_shell.html#setrep

- Run fsck and confirm you no longer get this warning.
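Put together, the steps above might look like the following. This is a sketch only: the Trash path is illustrative, and the Accumulo shell syntax assumes a release of that era (the !METADATA table name applies to Accumulo 1.5 and earlier):

```shell
# In the Accumulo shell: cap replication for the metadata table at 3
config -t !METADATA -s table.file.replication=3

# From a terminal: lower the replication factor of the stranded files.
# -R recurses into the directory, -w waits until replication completes.
hadoop fs -setrep -R -w 3 /user/accumulo/.Trash

# Re-run fsck and confirm the under-replication warning is gone
hadoop fsck /
```

(dfs.replication.max itself is set in hdfs-site.xml and requires a NameNode restart to take effect.)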

Regards,
Gautam Gopalakrishnan