HDFS - Under-Replicated Blocks, Corrupt Blocks
Labels: Apache Accumulo, Apache Hadoop, HDFS
Created on 08-13-2014 04:40 PM - edited 09-16-2022 02:04 AM
Hi,
I am getting the errors below when I run the "hadoop fsck /" command. Please help me with this.
/user/accumulo/.Trash/Current/
.
/user/accumulo/.Trash/Current/accumulo/tables/+r/root_tablet/delete+A00008z6.rf+A00008z4.rf: Under replicated BP-349021044-10.180.6.236-1406825419975:blk_1073758933_18161. Target Replicas is 5 but found 3 replica(s).
.
/user/accumulo/.Trash/Current/accumulo/tables/+r/root_tablet/delete+A00008z6.rf+F00008z5.rf: Under replicated BP-349021044-10.180.6.236-1406825419975:blk_1073758938_18166. Target Replicas is 5 but found 3 replica(s).
.
/user/accumulo/.Trash/Current/accumulo/tables/+r/root_tablet/delete+A00008z8.rf+A00008z6.rf: Under replicated BP-349021044-10.180.6.236-1406825419975:blk_1073758939_18167. Target Replicas is 5 but found 3 replica(s).
.
/user/accumulo/.Trash/Current/accumulo/tables/+r/root_tablet/delete+A00008z8.rf+F00008z7.rf: Under replicated BP-349021044-10.180.6.236-1406825419975:blk_1073758941_18169. Target Replicas is 5 but found 3 replica(s).
.
/user/accumulo/.Trash/Current/accumulo/tables/+r/root_tablet/delete+A00008za.rf+A00008z8.rf: Under replicated BP-349021044-10.180.6.236-1406825419975:blk_1073758942_18170. Target Replicas is 5 but found 3 replica(s).
.
/user/accumulo/.Trash/Current/accumulo/tables/+r/root_tablet/delete+A00008za.rf+F00008z9.rf: Under replicated BP-349021044-10.180.6.236-1406825419975:blk_1073758944_18172. Target Replicas is 5 but found 3 replica(s).
............................
....................................................................................................
....................................................................................................
......................................................Status: HEALTHY
Total size: 212515269 B (Total open files size: 558 B)
Total dirs: 4197
Total files: 1654
Total symlinks: 0 (Files currently being written: 6)
Total blocks (validated): 1650 (avg. block size 128797 B) (Total open file blocks (not validated): 6)
Minimally replicated blocks: 1650 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 1341 (81.27273 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 3
Average block replication: 2.9921212
Corrupt blocks: 0
Missing replicas: 2669 (35.090717 %)
Number of data-nodes: 3
Number of racks: 1
FSCK ended at Wed Aug 13 19:15:42 EDT 2014 in 77 milliseconds
Created 08-13-2014 05:04 PM
- Mark as New
- Bookmark
- Subscribe
- Mute
- Subscribe to RSS Feed
- Permalink
- Report Inappropriate Content
The error message says some Accumulo files in the Trash folder only have 3 replicas whereas there should be 5. The default value of dfs.replication is 3, and by default dfs.replication.max is set to 512; this is the maximum number of replicas allowed for a block. Accumulo checks whether dfs.replication.max is set and, if not, uses 5 as the replication factor. What version of CDH are you running?
All this is detailed in ACCUMULO-683:
https://issues.apache.org/jira/browse/ACCUMULO-683
So you can do the following (a sketch of these commands is below):
- Set dfs.replication.max to 3
- Set table.file.replication for the !METADATA table to 3 as well
- Use "hadoop fs -setrep" to change the replication factor of those files to 3 (see http://hadoop.apache.org/docs/r0.18.3/hdfs_shell.html#setrep)
- Run fsck and confirm you no longer get this warning
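Here is a rough sketch of those steps as shell commands. The file path and the Accumulo shell invocation are illustrative, so adjust them for your cluster before running anything:

# 1. Cap the maximum replication factor in hdfs-site.xml, then redeploy the
#    config and restart HDFS:
#      <property>
#        <name>dfs.replication.max</name>
#        <value>3</value>
#      </property>

# 2. In the Accumulo shell, lower the metadata table's file replication:
#      accumulo shell -u root
#      root@instance> config -t !METADATA -s table.file.replication=3

# 3. Reset the replication factor of the files fsck flagged, e.g. for one file:
hadoop fs -setrep -w 3 /user/accumulo/.Trash/Current/accumulo/tables/+r/root_tablet/delete+A00008z6.rf+A00008z4.rf
#    or recursively for the whole directory:
hadoop fs -setrep -R -w 3 /user/accumulo

# 4. Re-run fsck and confirm the under-replication warnings are gone:
hadoop fsck /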
Regards
Gautam
Gautam Gopalakrishnan
Created 08-13-2014 06:09 PM
Created 08-13-2014 06:18 PM
That shows the default in CDH 5.0.0 is 512 as well. Please try the steps I provided earlier and let me know if it helped.
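If you want to double-check the effective value on your cluster, something like the following should work from any node that has the HDFS client configuration in place (on a stock install it should print 512):

hdfs getconf -confKey dfs.replication.max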
Gautam Gopalakrishnan
Created 08-14-2014 10:56 AM
Thank you so much. It worked for me. It's a good solution.
