
namenode failed

New Contributor

Hi,

My Hadoop version is 0.20.203.0. The namenode running on my Hadoop cluster was shut down. I checked the logs and found the error message only in the secondary namenode logs:

 

2014-09-27 22:18:54,930 WARN org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Checkpoint done. New Image Size: 29552383
2014-09-27 22:19:42,792 INFO org.mortbay.log: org.mortbay.io.nio.SelectorManager$SelectSet@8135daf JVM BUG(s) - injecting delay2 times
2014-09-27 22:19:42,792 INFO org.mortbay.log: org.mortbay.io.nio.SelectorManager$SelectSet@8135daf JVM BUG(s) - recreating selector 2 times, canceled keys 38 times
2014-09-27 23:18:55,508 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
2014-09-27 23:18:55,508 FATAL org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Fatal Error : All storage directories are inaccessible.
2014-09-27 23:18:55,509 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: SHUTDOWN_MSG:

 

There was another error message that appeared in the logs of one of my datanodes:

 

2014-09-27 01:03:58,535 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.75.6.51:50010, storageID=DS-532990984-10.75.6.51-50010-1343295370699, infoPort=50075, ipcPort=50020):DataXceiver
org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: No space left on device
        at org.apache.hadoop.hdfs.server.datanode.DataNode.checkDiskError(DataNode.java:770)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:475)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:528)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:397)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:107)
        at java.lang.Thread.run(Thread.java:662)

 

I am not sure whether this is the root cause of the namenode shutdown.
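For reference, a quick way to confirm the disk usage on that datanode (the data directory path below is only an example; substitute whatever dfs.data.dir points to in your configuration):

# Run on the datanode host (10.75.6.51)
df -h                          # overall free space per filesystem
du -sh /data/hdfs/dfs/data     # size of the HDFS data directory (example path)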

 

A new error was raised when I tried to restart the namenode:

 

2014-09-28 11:25:06,202 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.IOException: Incorrect data format. logVersion is -31 but writables.length is 0.
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.loadFSEdits(FSEditLog.java:542)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSEdits(FSImage.java:1009)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:827)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:365)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:97)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:379)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:353)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:254)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:434)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1153)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1162)

 

Does anyone know about this? Is it possible to fix the image and edits files?

 

 

Thanks,

Kevin Jin

2 REPLIES

Re: namenode failed

Expert Contributor
Looks like your edits file is corrupted. Can you try replacing your current metadata with the files from a previous checkpoint and see if that resolves the issue?
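Something along these lines, for example (the paths are only placeholders; use your actual dfs.name.dir and fs.checkpoint.dir values, and back everything up first):

# 1. Back up the current (damaged) namenode metadata directory
cp -r /data/hdfs/dfs/name /data/hdfs/dfs/name.bak

# 2a. Copy the last good checkpoint from the secondary namenode's
#     fs.checkpoint.dir into dfs.name.dir on the namenode, e.g.:
scp -r secondarynn:/data/hdfs/dfs/namesecondary/current/* /data/hdfs/dfs/name/current/

# 2b. Or let the namenode import the checkpoint itself: empty dfs.name.dir,
#     point fs.checkpoint.dir on the namenode at a copy of the secondary's
#     checkpoint, then start with:
bin/hadoop namenode -importCheckpoint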

Have you upgraded to a new version of Hadoop recently?
Em Jay

Re: namenode failed

Expert Contributor
Also, check whether your namenode metadata directory has enough space to write new metadata.
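For example (assuming dfs.name.dir is something like /data/hdfs/dfs/name; adjust to your configuration):

df -h /data/hdfs/dfs/name     # free space on the filesystem holding dfs.name.dir
df -i /data/hdfs/dfs/name     # inode usage can also be exhausted even when space looks fine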
Em Jay