Support Questions

Find answers, ask questions, and share your expertise

Failed to start namenode. java.io.FileNotFoundException: File does not exist

Explorer

I'm facing an issue where the NameNode is down. It is a parcel-based CDH 5.15.2 installation on AWS.


9:15:18.262 PM ERROR NameNode
Failed to start namenode.
java.io.FileNotFoundException: File does not exist: /user/spark/spark2ApplicationHistory/.5e9b4c52-032a-4469-b278-80aa6254cfdf
	at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:66)
	at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:56)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:429)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:232)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:141)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:903)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:756)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:324)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1152)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:799)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:614)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:676)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:844)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:823)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1547)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1615)
9:15:18.264 PM INFO ExitUtil
Exiting with status 1
9:15:18.265 PM INFO NameNode
SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at xxxx.xxxx.xxxx/**.**.**.**
************************************************************/


I've been stuck on this for two days. Any help would be highly appreciated.


Cheers,

1 ACCEPTED SOLUTION

Mentor
This looks like a case of edit logs getting reordered. As @bgooley noted, it is similar to HDFS-12369, where an OP_CLOSE appears after an OP_DELETE, causing the file to be absent when the edits are replayed.

The simplest fix, depending on whether this file is the only instance of the reordering issue in your edit logs, would be to run the NameNode manually in edits-recovery mode and "skip" this edit when it catches the error. The rest of the edits should apply normally and let you start up your NameNode.

The recovery mode of NameNode is detailed at https://blog.cloudera.com/blog/2012/05/namenode-recovery-tools-for-the-hadoop-distributed-file-syste...

If you're using CM, log in as the 'hdfs' user and set HADOOP_CONF_DIR to the NameNode's most recently generated configuration directory under /var/run/cloudera-scm-agent/process/ on the NameNode host before invoking the manual NameNode startup command.

Once you've followed the prompts and the NameNode appears to start up, quit/kill it and restart it normally from Cloudera Manager.
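For reference, a sketch of what that manual invocation can look like on a CM-managed host. The process directory name varies per host and per restart, so the glob below is an assumption to illustrate picking the most recent one; verify the path on your NameNode before running.

```shell
# Run on the NameNode host as the hdfs user. The process directory glob is
# illustrative -- confirm which *-hdfs-NAMENODE directory is the newest on
# your host before relying on it.
sudo -u hdfs bash -c '
  export HADOOP_CONF_DIR=$(ls -d /var/run/cloudera-scm-agent/process/*-hdfs-NAMENODE 2>/dev/null | sort -V | tail -1)
  echo "Using HADOOP_CONF_DIR=$HADOOP_CONF_DIR"
  hdfs namenode -recover
'
```

`hdfs namenode -recover` walks the edit log interactively and prompts you on each error it hits (continue, skip, or quit); choosing to skip the failing operation is what lets the remaining edits apply.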

If you have a Support subscription, I'd recommend filing a case for this, as the process could get more involved depending on how widespread this issue is.
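To gauge how widespread the reordering is before attempting recovery, you can dump the relevant edit-log segment to XML with the offline edits viewer and search it for the path from the error. The segment filename below is a placeholder; use a real `edits_*` file from your NameNode's metadata directory.

```shell
# Convert a binary edits segment to readable XML (filename is a placeholder
# example -- substitute an actual segment from the NameNode metadata dir).
hdfs oev -i edits_0000000000000000001-0000000000000000100 -o edits.xml

# Count operations touching the directory from the error message.
grep -c 'spark2ApplicationHistory' edits.xml
```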


4 REPLIES

Expert Contributor

Hi @urbanlad20,

 

Your error is: 

File does not exist: /user/spark/spark2ApplicationHistory/.5e9b4c52-032a-4469-b278-80aa6254cfdf


Can you restart the Spark installation? It seems to be related.


Regards,

Manu.

Explorer
I've already tried removing Spark and then restarting the NameNode, but it didn't help; the NameNode is still looking for that file. I then reinstalled Spark and restarted the NameNode again, still no luck.

Master Guru

@urbanlad20 ,


That stack trace reminds me of HDFS-12369, but I thought that had been fixed as of CDH 5.12.2.

I think it would be good to have the HDFS folks look at this; I'll move this thread to the HDFS board.
