data disk unmounted
Labels: Apache Hadoop
Created ‎05-11-2017 07:07 PM
Hi, somehow a data disk on one of the data nodes was unmounted and then mounted back again, so the data is still on it. When something like this happens, is the data residing on that disk corrupted, meaning I have to clean the disk and mount it back? Or what should I do?
What exactly is the procedure to get this disk back into the DataNode, considering this is a production cluster?
Thanks...
Created ‎05-11-2017 07:41 PM
Hi @PJ,
See https://issues.apache.org/jira/browse/HDFS-4239 for a relevant discussion.
Shut down the DataNode, clean the disk, remount it, and restart the DataNode. With HDFS's default replication factor of 3, that shouldn't be a problem: the blocks that were on that disk still have replicas on other nodes. Make sure the new mount is listed in the dfs.data.dir config.
Alternatively, you can decommission the node and recommission it following the steps here:
https://community.hortonworks.com/articles/3131/replacing-disk-on-datanode-hosts.html
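A minimal shell sketch of that first option, run on the affected host (the device /dev/sdb1, the mount point /grid/1, and the data directory path below are placeholders, not values from this thread; mkfs destroys everything on the disk):

# 1. Stop the DataNode (via Ambari, or manually as the hdfs user;
#    the location of hadoop-daemon.sh varies by install):
su - hdfs -c "hadoop-daemon.sh stop datanode"
# 2. Clean and remount the disk (placeholder device and mount point):
umount /grid/1
mkfs.ext4 /dev/sdb1                      # wipes the disk
mount /dev/sdb1 /grid/1
mkdir -p /grid/1/hadoop/hdfs/data        # recreate the DataNode data dir
chown -R hdfs:hadoop /grid/1/hadoop/hdfs/data
# 3. Confirm the mount is listed in the DataNode data dirs
#    (dfs.datanode.data.dir in current releases, dfs.data.dir in older ones):
hdfs getconf -confKey dfs.datanode.data.dir
# 4. Restart the DataNode:
su - hdfs -c "hadoop-daemon.sh start datanode"

Once the DataNode reports in without those blocks, the NameNode re-replicates anything that is now under-replicated, so no manual copying is needed.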
Created ‎05-11-2017 11:44 PM
1. Run this command as the hdfs user:
hdfs fsck /
2. If it reports CORRUPT files, try the commands below:
hdfs fsck / -list-corruptfileblocks
hdfs fsck $hdfsfilepath -files -blocks -locations
hdfs fsck $corrupt_files_path -delete
3. After completing the above, re-run the command from step 1:
hdfs fsck /
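As a sketch, the three steps above can be wrapped in a small script run as the hdfs user (the grep on the fsck summary line and the example file path are assumptions; -delete permanently removes the corrupt files, so review the list before using it):

#!/bin/bash
# hdfs fsck prints a summary line saying the filesystem is HEALTHY or CORRUPT.
if hdfs fsck / | grep -q "is HEALTHY"; then
    echo "No corrupt blocks found."
else
    # List the files with corrupt blocks, then inspect each one:
    hdfs fsck / -list-corruptfileblocks
    # hdfs fsck /path/to/file -files -blocks -locations   # placeholder path
    # Only after confirming no good replica survives anywhere:
    # hdfs fsck /path/to/file -delete
fi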
