This article gives a step-by-step procedure to recover an accidentally deleted HDFS file that is no longer available even in the Trash. Use this procedure with caution on a production system. I strongly suggest taking support's help if you are not familiar with the internal workings.

IMPORTANT:

Please make sure the NameNode is stopped immediately after the file deletion; otherwise recovery becomes hard, because the NameNode has already sent out block deletion requests to the DataNodes, and the physical blocks may get deleted by the DataNodes.

Always take a backup of the "fsimage" and "edits" files before you perform these steps.
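
For example, a minimal backup could look like the following; /data/hadoop/hdfs/namenode is the dfs.namenode.name.dir from my lab, so adjust the path to your environment:

# Copy the whole metadata directory (fsimage + edits) before touching anything.
cp -rp /data/hadoop/hdfs/namenode/current /data/hadoop/hdfs/namenode/current.bak.$(date +%Y%m%d%H%M%S)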

1). Let's create a sample file in HDFS.

su - hdfs

hadoop fs -put /etc/passwd /tmp

[hdfs@hdpnode1 current]$ hadoop fs -ls /tmp/passwd

-rw-r--r-- 3 hdfs hdfs 2370 2016-04-06 12:45 /tmp/passwd

2). Delete the file, making sure it does not go to the Trash.

hadoop fs -rmr -skipTrash /tmp/passwd

3). Stop the HDFS service via Ambari.
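
If you prefer the command line over Ambari, here is a sketch assuming the standard hadoop-daemon.sh script is available on the NameNode host:

su - hdfs
hadoop-daemon.sh stop namenode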

4). Go to the NameNode metadata directory (dfs.namenode.name.dir). In my lab environment it is set to /data/hadoop/hdfs/namenode:

cd /data/hadoop/hdfs/namenode
cd current

Look for a file whose name starts with "edits_inprogress". This should be the most recent file, the one to which all transactions are being pushed.
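
For example, listing by modification time makes the in-progress segment easy to spot:

ls -lrt edits_inprogress_*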

5). Convert the binary edits file to XML format:

hdfs oev -i edits_inprogress_0000000000000001689 -o edits_inprogress_0000000000000001689.xml

6). Open the file and look for the transaction that recorded the delete operation on /tmp/passwd. In our case it looked like the record below.

<RECORD>
  <OPCODE>OP_DELETE</OPCODE>
  <DATA>
    <TXID>1792</TXID>
    <LENGTH>0</LENGTH>
    <PATH>/tmp/passwd</PATH>
    <TIMESTAMP>1459943154392</TIMESTAMP>
    <RPC_CLIENTID>7aa59270-a89f-4113-98b5-6031ba898a8c</RPC_CLIENTID>
    <RPC_CALLID>1</RPC_CALLID>
  </DATA>
</RECORD>
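
On a busy cluster the XML can be large; grepping for the path is a quick way to locate the record. The file name is the one converted in step 5, and the path also appears in earlier OP_ADD/OP_CLOSE records, so look for the match whose OPCODE is OP_DELETE:

grep -n -B 6 -A 6 '/tmp/passwd' edits_inprogress_0000000000000001689.xml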

Remove the complete entry (from <RECORD> to </RECORD>) and save the XML file.

7). Convert the XML back to binary format:

cd /data/hadoop/hdfs/namenode/current
hdfs oev -i edits_inprogress_0000000000000001689.xml -o edits_inprogress_0000000000000001689 -p binary
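
As a quick sanity check (check.xml is just a scratch file name), you can convert the rebuilt binary back to XML and confirm the delete record for /tmp/passwd is gone:

hdfs oev -i edits_inprogress_0000000000000001689 -o check.xml
grep -A 6 'OP_DELETE' check.xml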

8). If the transaction you removed was the last entry in the edits_inprogress file, you can simply start the NameNode, which brings the lost /tmp/passwd file back to the /tmp directory. If there are further transactions after the accidental delete, you need to run the NameNode recovery command, shown after the sketch below.
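
A rough way to check is to compare the last TXID in the converted XML against the TXID of the record you removed (1792 in our example); if they match, the delete was the final transaction:

grep '<TXID>' edits_inprogress_0000000000000001689.xml | tail -1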

hadoop namenode -recover

16/04/06 13:07:58 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = hdpnode1.world.com/192.168.56.42
STARTUP_MSG: args = [-recover]
STARTUP_MSG: version = 2.7.1.2.4.0.0-169
STARTUP_MSG: classpath = <OUTPUT TRUNCATED FOR BREVITY>
STARTUP_MSG: build = git@github.com:hortonworks/hadoop.git -r 26104d8ac833884c8776473823007f176854f2eb; compiled by 'jenkins' on 2016-02-10T06:18Z
STARTUP_MSG: java = 1.8.0_60
************************************************************/
16/04/06 13:07:58 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
16/04/06 13:07:58 INFO namenode.NameNode: createNameNode [-recover]
You have selected Metadata Recovery mode.
This mode is intended to recover lost metadata on a corrupt filesystem.
Metadata recovery mode often permanently deletes data from your HDFS filesystem.
Please back up your edit log and fsimage before trying this!
Are you ready to proceed? (Y/N) (Y or N) Y

You will see an error similar to the one below.

16/04/06 13:08:09 ERROR namenode.FSImage: Error replaying edit log at offset 2186. Expected transaction ID was 1798
Recent opcode offsets: 1825 1946 2081 2186
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream$PrematureEOFException: got premature end-of-file at txid 1797; expected file to go up to 1798
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1707)
<OUTPUT TRUNCATED FOR BREVITY>
16/04/06 13:08:09 ERROR namenode.MetaRecoveryContext: We failed to read txId 1798

You now have the following options:

Enter 'c' to continue, skipping the bad section in the log
Enter 's' to stop reading the edit log here, abandoning any later edits
Enter 'q' to quit without saving
Enter 'a' to always select the first choice in the future without prompting. (c/s/q/a)
c
16/04/06 13:08:18 INFO namenode.MetaRecoveryContext: Continuing
16/04/06 13:08:18 INFO namenode.FSImage:

Edits file /data/hadoop/hdfs/namenode/current/edits_0000000000000001777-0000000000000001798 of size 2186 edits # 21 loaded in 9 seconds
16/04/06 13:08:18 INFO namenode.FSNamesystem: Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
16/04/06 13:08:18 INFO namenode.FSEditLog: Starting log segment at 1799
16/04/06 13:08:19 INFO namenode.NameCache: initialized with 0 entries 0 lookups
16/04/06 13:08:19 INFO namenode.FSNamesystem: Finished loading FSImage in 10125 msecs
16/04/06 13:08:19 INFO namenode.FSImage: Save namespace ...
16/04/06 13:08:19 INFO namenode.FSEditLog: Ending log segment 1799
16/04/06 13:08:19 INFO namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms):

<OUTPUT TRUNCATED FOR BREVITY>

/data/hadoop/hdfs/namenode/current/edits_0000000000000001801-0000000000000001802
16/04/06 13:08:19 INFO namenode.FSNamesystem: Stopping services started for standby state
16/04/06 13:08:19 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hdpnode1.world.com/192.168.56.42
************************************************************/

The recovery command above realigns the HDFS transaction IDs in an orderly manner.

9). Now restart HDFS via Ambari and check for the lost file:

hadoop fs -ls /tmp/passwd
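
Optionally, run fsck on the recovered file to confirm its blocks are all reported healthy:

hdfs fsck /tmp/passwd -files -blocks -locations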

Comments

How long are the blocks around? They will be cleaned up eventually, and then changing the fsimage doesn't help you anymore, right? But this article explains a lot about the way the NameNode works. Very cool.

@jramakrishnan - Very cool. I think it will not work after the NN asks the DNs to delete the blocks and the DNs delete them.

You are correct. These steps may not help if you deleted the file and realized it only after dfs.namenode.startup.delay.block.deletion.sec has elapsed.

@Benjamin Leonhardi

This is based on the parameter dfs.namenode.startup.delay.block.deletion.sec.
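
For reference, the effective value on a cluster can be read with getconf:

hdfs getconf -confKey dfs.namenode.startup.delay.block.deletion.sec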

Hi, I tried this on a NameNode HA cluster with JournalNodes, but after the HDFS cluster came up, /tmp/passwd was not recovered.

Which steps differ for NameNode HA?

dfs.namenode.startup.delay.block.deletion.sec is 3600, and I tested this within 1 hour.

Useful tips. Thanks, Jag, for sharing.


@Jagatheesh Ramakrishnan Appreciate your effort in writing up this data recovery procedure. Can you please add a note to this article? The NameNode should be stopped immediately after the file deletion; otherwise it is hard to recover, because the NameNode has already sent out block deletion requests to the DataNodes, so the physical blocks might get deleted.