
namenode + hadoop namenode -recover


We have a Hadoop cluster with two NameNodes (active/standby) and 12 DataNodes.


All 12 DataNode machines have disks for HDFS.


We are about to run `hadoop namenode -recover`, because we suspect corrupted metadata files such as fsimage_0000000000001253918 or edits_0000000000001203337-0000000000001214475.


So to recover the HDFS metadata we can do the following:


$ hadoop namenode -recover
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

21/01/19 17:56:35 INFO namenode.NameNode: STARTUP_MSG:
STARTUP_MSG: Starting NameNode
STARTUP_MSG: user = hdfs
STARTUP_MSG: args = [-recover]
STARTUP_MSG: version =

21/01/19 17:56:35 INFO namenode.NameNode: createNameNode [-recover]
You have selected Metadata Recovery mode. This mode is intended to recover lost metadata on a corrupt filesystem. Metadata recovery mode often permanently deletes data from your HDFS filesystem. Please back up your edit log and fsimage before trying this!

Are you ready to proceed? (Y/N)
(Y or N) y
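The startup message above warns you to back up the edit log and fsimage before proceeding. A minimal backup sketch in shell, assuming the NameNode metadata directory (dfs.namenode.name.dir in hdfs-site.xml) is /data/hdfs/namenode — adjust the path for your cluster:

```shell
# Archive the whole NameNode metadata directory (fsimage_* and edits_* live
# under its "current" subdirectory) before attempting -recover.
NN_DIR=/data/hdfs/namenode          # assumed dfs.namenode.name.dir value
BACKUP=/root/nn-meta-$(date +%Y%m%d%H%M%S).tar.gz
tar -czf "$BACKUP" -C "$(dirname "$NN_DIR")" "$(basename "$NN_DIR")"
tar -tzf "$BACKUP" | head           # quick sanity check of the archive
```

If recovery makes things worse, restoring is just stopping the NameNode and extracting this tarball back over the metadata directory.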

The question is:

Does this action also affect the data itself on the DataNode machines?


Or only the metadata on the NameNode machines?


Master Guru

@mike_bronson7 Looks like your question is covered here:


Expert Contributor

Adding to @GangWar's reply:


To your question — does this action also affect the data itself on the DataNode machines?


No, it does not affect the data on the DataNodes directly. This is a metadata operation on the NameNode: when the NameNode fails to progress through the edits or fsimage, it may need to be started with the -recover option.


However, since the metadata holds the references to the blocks on the DataNodes, this is still a critical operation and may incur data loss.
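Two follow-up points worth sketching, assuming an HA cluster like the one described. These commands need a running cluster, so treat them as an outline rather than a tested recipe:

```shell
# 1) Because this cluster runs NameNode HA, re-seeding the broken NameNode
#    from the healthy one is often safer than -recover. With the broken
#    NameNode stopped, run on its host:
hdfs namenode -bootstrapStandby

# 2) After any recovery, verify which files and blocks survived; fsck
#    reports blocks that the recovered metadata can no longer reach:
hdfs fsck / | tail -n 25
hdfs dfsadmin -report | head -n 20
```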