Created on 08-13-2014 04:40 PM - edited 09-16-2022 02:04 AM
Hi,
I am getting the errors below when I run the "hadoop fsck /" command. Please help me with this.
/user/accumulo/.Trash/Current/
Created 08-13-2014 05:04 PM
The error message says that some Accumulo files in the Trash folder have only 3 replicas whereas there should be 5. The default value of dfs.replication is 3, while dfs.replication.max, the maximum number of replicas allowed for a block, defaults to 512. Accumulo checks whether dfs.replication.max is set and, if it is not, uses 5 as the replication factor. What version of CDH are you running?
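To see where your cluster stands, you can query the configured values and list the replication factor on the flagged files, for example (the Trash path here is taken from your fsck output; adjust it to match):

  hdfs getconf -confKey dfs.replication
  hdfs getconf -confKey dfs.replication.max
  hadoop fs -ls /user/accumulo/.Trash/Current

The second column of the "hadoop fs -ls" listing is the replication factor of each file.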
All of this is detailed in ACCUMULO-683:
https://issues.apache.org/jira/browse/ACCUMULO-683
So you can do the following (a sketch of the commands is shown after this list):
- set dfs.replication.max to 3
- set table.file.replication for the !METADATA table to 3 as well
- use "hadoop fs -setrep" to change the replication factor of those files to 3
http://hadoop.apache.org/docs/r0.18.3/hdfs_shell.html#setrep
- run fsck and confirm you no longer get this warning
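For example, assuming the under-replicated files live under /user/accumulo/.Trash as in your fsck output (adjust paths and values to your cluster), the steps might look like this.

In hdfs-site.xml:

  <property>
    <name>dfs.replication.max</name>
    <value>3</value>
  </property>

In the Accumulo shell:

  config -t !METADATA -s table.file.replication=3

From the command line (-R applies the change recursively, then fsck verifies it):

  hadoop fs -setrep -R 3 /user/accumulo/.Trash
  hadoop fsck /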
Regards
Gautam
Created 08-14-2014 10:56 AM
Thank you so much. It worked for me. It's a good solution.