
Tell the NameNode where to find a "MISSING" block?

Explorer

We're in the process of decommissioning some of our older datanodes.  Today, after decommissioning a node, HDFS is reporting a bunch of missing blocks.  Checking HDFS, it looks like the files in question are RF1 (replication factor 1); I'm assuming someone manually set them that way for some reason.


Since we're decommissioning, the actual blocks are still available in the data directories on the old node.  So I happily copied one of them, and its meta file, over to an active node.  They're in a different data directory, but the "subdirs" underneath "finalized" are the same.  The NameNode still can't see the block, though.  Is there a way for me to tell the NameNode "Hey, that block's over here now!" without actually restarting it?
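In case the details matter: the copy was roughly this shape (the block ID, block pool ID, and disk paths below are made up for illustration; the real ones differ):

    # on the decommissioned node: find the block file and its .meta file
    find /data1/dfs/dn -name 'blk_1073741825*'

    # copy both into the matching subdirs under "finalized" on an active node
    scp /data1/dfs/dn/current/BP-111111111-10.0.0.1-1500000000000/current/finalized/subdir0/subdir0/blk_1073741825* \
        activenode:/data5/dfs/dn/current/BP-111111111-10.0.0.1-1500000000000/current/finalized/subdir0/subdir0/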


I know I can probably recommission the node I took down, fix the RF on the files, and then decom it again, but these are big nodes (each holds about 2 TB of HDFS data), and decommissioning takes several hours.

7 REPLIES

Expert Contributor

Have you tried restarting the DN you copied the blocks to?

Also, try forcing a full block report: hdfs dfsadmin -triggerBlockReport <datanode_host:ipc_port>
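For example (the hostname and port are placeholders; the IPC port is whatever dfs.datanode.ipc.address is set to, 50020 by default on Hadoop 2.x):

    hdfs dfsadmin -triggerBlockReport dn05.example.com:50020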

Champion
You would need to add the copied directory as a DFS data directory. Even then, I don't know if the NN will pick them up as the same blocks, since a different DN will now have them on its block report. Typically, if a DN reports a block that doesn't match what the NN expects, the NN tells it to delete it.
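You can check which directories the DataNode is configured to use (this reads the config visible where you run it, so check the DN's own hdfs-site.xml to be certain):

    hdfs getconf -confKey dfs.datanode.data.dir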

The safe approach is to recommission the old node, change the replication factor, and then decommission it again.
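If you go that route, something along these lines (the path is just an example) will raise the replication factor and wait for the extra replicas to land before you start the second decommission:

    hdfs dfs -setrep -w 3 /path/to/the/rf1/files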

Explorer
The directory I copied it to was already a known data directory. The subdirs already existed, in fact, so I assume HDFS was aware of them.

In the end, HDFS actually copied the missing blocks over from the decommissioned node. It's just annoying that an fsck reports those blocks as "MISSING" when it knows where they are and that it's going to copy them eventually.

Explorer
The cluster's active, so restarting that datanode isn't in the cards. In the end, the decom process actually copied the missing blocks over from the decom'd node. Not sure why it doesn't do that immediately as soon as it discovers that the blocks aren't replicated elsewhere.

To be fair, the Cloudera UI only reported under-replicated blocks; it never mentioned the missing blocks, and I was able to "hdfs dfs -cat" one of the files that was being reported as corrupt. The only thing that mentioned the missing blocks was an "hdfs fsck /". I'm assuming that HDFS is aware of the decom process and will look for the blocks on the decommissioning server, but it doesn't note that in the fsck output, which is pretty annoying.
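If anyone else hits this: a plain "hdfs fsck /" only gives the summary; adding flags shows which files and blocks are affected and where the remaining replicas live (the path here is just an example):

    hdfs fsck /path/with/rf1/files -files -blocks -locations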

Champion

If your cluster is managed by Cloudera Manager, I would use it for decommissioning rather than doing it manually; it's safer and the recommended approach.

Explorer
Yep, I was using the Web UI. The Web UI never reported the missing blocks; only an "hdfs fsck /" noted them. HDFS eventually copied the missing blocks over from the decom'd server on its own.

Expert Contributor

Interesting story.

The decommission process would not complete until all blocks have at least one good replica on other DNs (a good replica being one that is not stale and is on a DataNode that is not being decommissioned or already decommissioned).
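On reasonably recent Hadoop versions you can also watch this from the NameNode side while the decommission is running:

    hdfs dfsadmin -report -decommissioning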


The DirectoryScanner in a DataNode periodically scans the entire data directory, reconciling inconsistencies between the in-memory block map and the on-disk replicas, so it would eventually pick up the added replica; it's just a matter of time.
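If you want a sense of how long "eventually" is, the scan period is controlled by dfs.datanode.directoryscan.interval (21600 seconds, i.e. six hours, by default); you can check the effective value with:

    hdfs getconf -confKey dfs.datanode.directoryscan.interval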