Best way of handling corrupt or missing blocks?


Hi,

What is the best way of handling corrupt or missing blocks?

19 REPLIES

Contributor

Note: if you are running your cluster in the cloud or use virtualization, you may end up in a situation where multiple VMs run on the same physical host. In that case, a single physical failure can have grave consequences, namely losing data, e.g. if all replicas of a block are stored on the same physical host. The likelihood of this depends on the cloud provider and may be high or remote. Be aware of this risk and prepare by keeping copies on highly durable (object) storage like S3 for disaster recovery.
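A common way to keep such an off-cluster copy is DistCp to object storage. This is a minimal sketch, assuming the S3A connector is already configured with credentials; the bucket name and paths are placeholders:

# Copy a critical HDFS directory to S3 for disaster recovery.
# -update only copies files that are new or changed since the last run.
hadoop distcp -update /data/critical s3a://my-dr-bucket/hdfs-backup/data/critical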

Super Collaborator

Adding to the above answers: hadoop fsck might not give the latest corrupt-block report.

HDFS only detects corrupt blocks when its periodic check runs, or when a client tries to read an affected file.

For details, please refer to: https://issues.apache.org/jira/browse/HDFS-8126
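If you want to see how often that periodic check (the DataNode block scanner) runs on your cluster, you can query the controlling property. A minimal sketch; the property name is standard, but the value depends on your distribution (504 hours, i.e. three weeks, is a common default):

# Print the configured DataNode block scanner interval, in hours.
hdfs getconf -confKey dfs.datanode.scan.period.hours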

Contributor

Good point @Pradeep Bhadani. If you want to 'force' a check of specific blocks, you can read the corresponding files, e.g. via Hive or MR, and run the fsck command afterwards to see whether an error was found (see the sketch below). The reasoning is the expense of checking a whole filesystem that may span PBs across hundreds of nodes.
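For a single suspect file you don't even need Hive or MR; something like the following works, where the path is just a placeholder:

# Reading the file forces the DataNodes to verify the block checksums;
# a bad replica encountered during the read is reported as corrupt.
hdfs dfs -cat /warehouse/suspect_table/part-00000 > /dev/null
# Afterwards, re-check only that path instead of the whole filesystem.
hdfs fsck /warehouse/suspect_table/part-00000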

New Contributor

Best way to find the list of missing blocks

Command:

[hdfs@sandbox ~]$ hdfs fsck / -list-corruptfileblocks

Output:

Connecting to namenode via http://sandbox.hortonworks.com:50070/fsck?ugi=hdfs&listcorruptfileblocks=1&path=%2F

The filesystem under path '/' has 0 CORRUPT files
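If the report does list corrupt files, you can drill into one of them to see exactly which blocks and DataNodes are affected. The path below is just a placeholder:

# Show per-file block IDs and the DataNodes holding each replica.
[hdfs@sandbox ~]$ hdfs fsck /path/to/suspect/file -files -blocks -locations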

Thanks

Jay

Contributor

Thanks for this, this is great!

Contributor

command "hdfs fsck / -delete" worked for me.

New Contributor

Please make sure, before deleting any corrupted blocks, that the data they belong to has been replicated successfully or can be restored from a backup; see the sketch below.
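As a hedged sketch of that workflow (the backup location and paths are placeholders):

# 1. See exactly which files are affected before touching anything.
hdfs fsck / -list-corruptfileblocks
# 2. If a good copy exists elsewhere (e.g. a DR bucket), restore it to a staging path first.
hadoop distcp s3a://my-dr-bucket/hdfs-backup/data/critical /data/restored
# 3. Only then remove the files whose blocks cannot be recovered.
hdfs fsck / -delete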

New Contributor

"hdfs fsck / -delete" worked for me. Thanks

New Contributor

Hi, I'd like to share a situation we encountered where 99% of our HDFS blocks were reported missing and we were able to recover them.

We had a system with 2 namenodes with high availability enabled.

For some reason, under the data folders of the datanodes, i.e. /data0x/hadoop/hdfs/data/current, we had two block pool folders listed (an example of such a folder is BP-1722964902-1.10.237.104-1541520732855).

There was one folder containing the IP of namenode 1 and another containing the IP of namenode 2.

All the data was under the block pool of namenode 1, but inside the VERSION files of the namenodes (/data0x/hadoop/hdfs/namenode/current/) the block pool ID and the namespace ID were those of namenode 2, so the namenodes were looking for blocks in the wrong block pool folder.

I don't know how we got to the point of having 2 block pools folders, but we did.

In order to fix the problem and get HDFS healthy again, we just needed to update the VERSION file on all the namenode disks (on both NN machines) and on all the journal node disks (on all JN machines) to point to namenode 1.

We then restarted HDFS and made sure all the blocks were reported and there were no more missing blocks.
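For anyone hitting something similar, this is a rough way to compare what the NameNode expects with what is actually on disk; the paths below are placeholders for our layout, so adjust them to your own data directories:

# Block pool ID and namespace ID the NameNode expects:
cat /data01/hadoop/hdfs/namenode/current/VERSION
# Block pool folders that actually exist on a DataNode disk:
ls /data01/hadoop/hdfs/data/current/
# If the BP-* folder that actually holds the data does not match the
# blockpoolID in VERSION, the NameNode is looking in the wrong block pool.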