
Best way of handling corrupt or missing blocks?

Hi,

What is best way of handling corrupt or missing blocks?


Mentor

@Rushikesh Deshmukh find out what these blocks are using the fsck command; if they are not critical, just delete them.

@Artem Ervits, thanks for your reply.

Contributor

You can use the command hdfs fsck / -list-corruptfileblocks to list corrupt or missing blocks, then hdfs fsck / -delete to remove the affected files, and follow the article above to fix the same.
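As a minimal sketch (assuming you run it as the hdfs superuser and have already confirmed the affected files are expendable):

# List the files that currently have corrupt or missing blocks (read-only)
hdfs fsck / -list-corruptfileblocks

# Only after confirming those files can be discarded, remove them
hdfs fsck / -delete

Note that -delete removes the whole files containing the corrupt blocks, not just the bad blocks themselves.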

Is there any way to recover corrupt blocks, or do we just have to delete them?

@Rushikesh Deshmukh You have 2 options ... Another link:

"The next step would be to determine the importance of the file, can it just be removed and copied back into place, or is there sensitive data that needs to be regenerated?

If it's easy enough just to replace the file, that's the route I would take."
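To illustrate the "just replace the file" route, a rough sketch (the HDFS path /user/etl/part-0001.csv and the local backup location are hypothetical):

# Remove the corrupt copy from HDFS
hdfs dfs -rm /user/etl/part-0001.csv

# Copy a known-good copy back into place from a local backup
hdfs dfs -put /backup/part-0001.csv /user/etl/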

@Neeraj Sabharwal, thanks for the quick reply.

@Rushikesh Deshmukh You're welcome! Please help me close the thread by accepting the best answer.

To identify "corrupt" or "missing" blocks, you can run 'hdfs fsck /path/to/file' from the command line. Other tools also exist.
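For example, a detailed per-file report can be requested like this (the path is a placeholder; the flags are standard fsck options):

hdfs fsck /path/to/file -files -blocks -locations

-files lists the files checked, -blocks prints their block IDs, and -locations shows which DataNodes hold each replica.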

HDFS will attempt to recover the situation automatically. By default there are three replicas of every block in the cluster, so if HDFS detects that one replica of a block has become corrupt or damaged, it will create a new replica of that block from a known-good replica and mark the damaged one for deletion.

The known-good state is determined by checksums which are recorded alongside the block by each DataNode.

The chances of two replicas of the same block becoming damaged are very small indeed. HDFS can - and does - recover from this situation because it has a third replica, with its checksum, from which further replicas can be created.

The chances of three replicas of the same block becoming damaged are so remote that it would suggest a significant failure somewhere else in the cluster. If this situation does occur and all three replicas are damaged, then 'hdfs fsck' will report that block as "corrupt" - i.e. HDFS cannot self-heal the block from any of its replicas.

Rebuilding the data behind a corrupt block is a lengthy process (like any data recovery process). If this situation should arise, deep investigation of the health of the cluster as a whole should also be undertaken.
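As a small client-side illustration of the checksum and replication machinery described above (paths are placeholders): you can print a file's checksum, and raise the replication factor of a particularly important file if you want more than the default three copies:

# Print the file-level checksum derived from the block checksums
hdfs dfs -checksum /path/to/file

# Raise the replication factor to 5 and wait until the extra replicas exist
hdfs dfs -setrep -w 5 /path/to/file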

Note, if you run your cluster in the cloud or use virtualization, you may end up in a situation where multiple VMs run on the same physical host. In that case a physical failure can have the grave consequence that you lose data, e.g. if all replicas are stored on the same physical host. The likelihood of this depends on the cloud provider and may be high or remote. Be aware of this risk and prepare with copies on highly durable (object) storage like S3 for DR.
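For example, a DR copy to object storage could be taken with DistCp (the bucket name is hypothetical, and the s3a connector must already be configured with credentials for your provider):

hadoop distcp /data/critical s3a://my-dr-bucket/backups/critical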

Expert Contributor

Adding to the above answers, hadoop fsck might not give the latest corruption report.

Hadoop determines corrupt blocks via a periodic check, or when a client tries to read a file.

For details , please refer : https://issues.apache.org/jira/browse/HDFS-8126

Good point @Pradeep Bhadani. If you want to 'force' a check of specific blocks, you can read the corresponding files, e.g. via Hive or MR, and run the fsck command afterwards to see if an error was found. The reasoning is the expense incurred by checking a whole filesystem that may span petabytes across hundreds of nodes.
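A simple way to do that from the shell, without Hive or MR (the path is a placeholder):

# Read the suspect file end to end so the DataNodes verify the block checksums
hdfs dfs -cat /path/to/suspect/file > /dev/null

# Then re-check just that path
hdfs fsck /path/to/suspect/file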


New Contributor

Best way to find the list of missing blocks

Command :-

[hdfs@sandbox ~]$ hdfs fsck / -list-corruptfileblocks

Output :-

Connecting to namenode via http://sandbox.hortonworks.com:50070/fsck?ugi=hdfs&listcorruptfileblocks=1&path=%2F

The filesystem under path '/' has 0 CORRUPT files

Thanks

Jay

Explorer

Thanks for this, this is great!

command "hdfs fsck / -delete" worked for me.

New Contributor

Please make sure, before deleting any corrupted blocks, that the data they belong to has been replicated successfully elsewhere.
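One quick way to check before deleting anything (the path is a placeholder): confirm the file still has healthy replicas and see where they live:

# Current replication factor of the file
hdfs dfs -stat %r /path/to/file

# Block-level view, including which DataNodes hold each replica
hdfs fsck /path/to/file -files -blocks -locations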

New Contributor

hdfs fsck / -delete" worked for me. Thanks

New Contributor

Hi, I'd like to share a situation we encountered where 99% of our HDFS blocks were reported missing and we were able to recover them.

We had a system with two NameNodes, with high availability enabled.

For some reason, under the data folders of the DataNodes, i.e. /data0x/hadoop/hdfs/data/current, we had two block pool folders listed (an example of such a folder is BP-1722964902-1.10.237.104-1541520732855).

One folder contained the IP of NameNode 1 and the other contained the IP of NameNode 2.

All the data was under the block pool of NameNode 1, but inside the VERSION files of the NameNodes (/data0x/hadoop/hdfs/namenode/current/) the block pool ID and the namespace ID were those of NameNode 2 - so the NameNode was looking for blocks in the wrong block pool folder.

I don't know how we got to the point of having two block pool folders, but we did.

In order to fix the problem - and get HDFS healthy again - we just needed to update the VERSION file on all the NameNode disks (on both NN machines) and on all the JournalNode disks (on all JN machines) to point to NameNode 1.

We then restarted HDFS and made sure all the blocks were reported and there were no more missing blocks.
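For anyone who hits something similar, this is roughly what we compared (/data01 stands in for one of the /data0x mounts; the block pool ID format is the same as in the example above):

# On a NameNode: the block pool and namespace the NameNode expects
grep -E 'blockpoolID|namespaceID' /data01/hadoop/hdfs/namenode/current/VERSION

# On a DataNode: the block pool folders that actually exist on disk
ls -d /data01/hadoop/hdfs/data/current/BP-*

The block pool ID in the NameNode VERSION files should match the folder that actually holds the data; in our case it did not, and that mismatch is what we fixed.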
