Support Questions

Find answers, ask questions, and share your expertise

data disk unmounted

Expert Contributor

Hi, somehow (or by someone) a data disk was unmounted from one of the data nodes and then mounted back again, so the data is still on it. When something like this happens, is the data residing on that disk corrupted, so that I have to clean the disk and mount it back? Or what should I do?

What exactly is the procedure to get this disk back into the DataNode, considering this is a production cluster?

Thanks...

1 ACCEPTED SOLUTION


Hi @PJ,

See https://issues.apache.org/jira/browse/HDFS-4239 for a relevant discussion of this scenario.

So shut down the DataNode, clean the disk, remount it, and restart the DataNode. With HDFS's replication factor of 3, losing the blocks on that one disk shouldn't be a problem: the NameNode will re-replicate them from the copies on other nodes. Make sure the new mount is listed in the dfs.data.dir config (dfs.datanode.data.dir in newer releases).
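The relevant setting is a comma-separated list of local directories in hdfs-site.xml; a minimal sketch, where /grid/0 and /grid/1 are hypothetical mount points:

```xml
<!-- hdfs-site.xml on the DataNode. /grid/0 and /grid/1 are hypothetical
     mount points; list every data disk, including the remounted one. -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/grid/0/hadoop/hdfs/data,/grid/1/hadoop/hdfs/data</value>
</property>
```

After editing, restart the DataNode so it picks up the directory list.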

Additionally, you can also decommission the node and recommission it following the steps here:

https://community.hortonworks.com/articles/3131/replacing-disk-on-datanode-hosts.html
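Decommissioning is driven by an exclude file that the NameNode reads; a minimal sketch, assuming a hypothetical file path:

```xml
<!-- hdfs-site.xml on the NameNode. /etc/hadoop/conf/dfs.exclude is a
     hypothetical path to a plain-text file listing hosts to decommission. -->
<property>
  <name>dfs.hosts.exclude</name>
  <value>/etc/hadoop/conf/dfs.exclude</value>
</property>
```

Add the DataNode's hostname to that file and run `hdfs dfsadmin -refreshNodes` to start decommissioning; once the disk is replaced, remove the hostname and refresh again to recommission.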


2 REPLIES


Rising Star

1. Run this command as the hdfs user:

hdfs fsck /

2. If it reports CORRUPT files, use the commands below to list, inspect, and then delete them (note that -delete permanently removes the corrupt files, so only do this once the data is recoverable or re-replicated elsewhere):

hdfs fsck / -list-corruptfileblocks

hdfs fsck $hdfsfilepath -locations -blocks -files

hdfs fsck $corrupt_files_path -delete

3. After completing the above, re-run the command from step 1 to confirm the filesystem is healthy:

hdfs fsck /
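If you want to script step 2, the corrupt file paths can be pulled out of the fsck listing; a minimal sketch against a sample of the output (the block IDs and paths below are made up, the format is an assumption, and the real listing comes from `hdfs fsck / -list-corruptfileblocks`):

```shell
# Sample -list-corruptfileblocks output (assumed format: block ID, then path).
sample='blk_1073741825 /data/logs/part-00000
blk_1073741826 /data/logs/part-00001
The filesystem under path /data has 2 CORRUPT files'

# Keep only the lines that start with a block ID and print the file path.
printf '%s\n' "$sample" | awk '/^blk_/ {print $2}'
```

The extracted paths can then be fed one by one into the inspect and delete commands above.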