
Namenode health


Explorer

There are 1 missing blocks. The following files may be corrupted:

 

blk_1073742098 /apps/hbase/data/WALs/hostname,16020,1566796412927-splitting/hostname%2C16020%2C1566796412927.meta.1566800165041.meta

Please check the logs or run fsck in order to identify the missing blocks. See the Hadoop FAQ for common causes and potential solutions.

3 REPLIES

Re: Namenode health

Mentor

@Manoj690 

Since the corrupt block belongs to an HBase meta WAL file, I think this is the procedure to resolve the issue:

How to fix corrupted files for an HBase table

 

Re: Namenode health

Explorer

That link did not solve it. Could you tell me the steps to remove the corrupted files?

Re: Namenode health

New Contributor

You can use
  hdfs fsck /
to determine which files have problems. Look through the output for missing or corrupt blocks (ignore under-replicated blocks for now). This command is very verbose, especially on a large HDFS filesystem, so I normally cut it down to the meaningful output with
  hdfs fsck / | egrep -v '^\.+$' | grep -v eplica
which drops lines that are nothing but dots and lines talking about replication.
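To see what that filter does, here is a small runnable demo on a few made-up sample lines (illustrative only, not real fsck output):

```shell
# Filter demo on made-up sample lines (not captured from a real cluster).
# The first stage drops lines that are nothing but dots (fsck progress);
# the second drops lines mentioning "eplica" (replication chatter).
printf '%s\n' \
  '............' \
  '/some/file: CORRUPT 1 blocks of total size 1048576 B' \
  '/other/file: Under replicated. Target Replicas is 3 but found 2' \
| egrep -v '^\.+$' | grep -v eplica
# Prints only the CORRUPT line.
```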
Once you find a corrupt file, run
  hdfs fsck /path/to/corrupt/file -locations -blocks -files
Use that output to determine where its blocks might live. If the file is larger than your block size, it may have multiple blocks.
You can use the reported block numbers to search the datanode and namenode logs for the machine or machines on which the blocks lived. Then look for filesystem errors on those machines: missing mount points, a datanode that is not running, a filesystem that was reformatted or reprovisioned. If you can find a problem that way and bring the block back online, the file will be healthy again.
Lather, rinse, and repeat until all files are healthy or you exhaust all alternatives looking for the blocks.
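The log hunt above can be sketched like this. Real log locations vary by distribution (the path named in the comment is an assumption), so the demo uses a throwaway directory to keep the commands runnable:

```shell
# Search logs for the block ID that fsck reported. On a real node the logs
# live under a distro-specific path such as /var/log/hadoop-hdfs (an
# assumption); here a temp directory with a placeholder file stands in.
logdir=$(mktemp -d)
echo 'blk_1073742098' > "$logdir/datanode.log"   # placeholder, not a real log line
grep -rl 'blk_1073742098' "$logdir"              # lists files mentioning the block
rm -rf "$logdir"
```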
Once you have determined what happened and cannot recover any more blocks, use the
  hdfs dfs -rm /path/to/file/with/permanently/missing/blocks
command to get your HDFS filesystem back to healthy, so you can start tracking new errors as they occur.