Created 02-18-2017 07:17 AM
I tried to start the NameNode several times, but it is not starting, so I checked the log file and found the exception below:
2017-02-18 15:05:23,548 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.io.IOException: There appears to be a gap in the edit log. We expected txid 1, but got txid 44.
        at org.apache.hadoop.hdfs.server.namenode.MetaRecoveryContext.editLogLoaderPrompt(MetaRecoveryContext.java:94)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:215)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:143)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:843)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:698)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:294)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:975)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:585)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:645)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:796)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1493)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1559)
2017-02-18 15:05:23,552 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2017-02-18 15:05:23,554 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at aruna/127.0.1.1
************************************************************/
It says there is a gap in the edit log. I searched Google but still have not found any solution.
All the other daemons are running:
aruna@aruna:~/hadoop-2.7.3/sbin$ sudo jps
4177 DataNode
4545 ResourceManager
5042 JobHistoryServer
5605 Jps
4854 NodeManager
4360 SecondaryNameNode
aruna@aruna:~/hadoop-2.7.3/sbin$
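For reference, this "gap in the edit log" error generally means the on-disk fsimage and edit log segments disagree about which transactions exist (here the NameNode expected to replay from txid 1, but the first available edits start at txid 44). A hedged diagnostic sketch, assuming the default dfs.namenode.name.dir location; the directory and the edits file name below are only illustrative, and the whole metadata directory should be backed up before changing anything:

# List the edit log segments to see which txid ranges are actually on disk
# (dfs.namenode.name.dir defaults to file://${hadoop.tmp.dir}/dfs/name)
ls /tmp/hadoop-aruna/dfs/name/current/

# Dump one segment with the offline edits viewer to confirm its txid range
hdfs oev -i edits_0000000000000000044-0000000000000000050 -o /tmp/edits.xml

# As a last resort, let the NameNode attempt an interactive repair of the gap
hdfs namenode -recover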
Created 02-18-2017 02:44 PM
Check whether the service is up. It should open port 9000 if you have not changed the value of:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
- Check the port. If the port is not open, start the service, or check the configuration to see whether it is actually supposed to listen on port 9000:
netstat -tnlpa | grep 9000
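You can also print the effective fs.defaultFS value directly (a quick check; this assumes the command is run against the same HADOOP_CONF_DIR the NameNode loads):

# Show the configured fs.defaultFS, e.g. hdfs://localhost:9000
hdfs getconf -confKey fs.defaultFS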
See:
https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html
Created 02-20-2017 02:58 AM
@Aruna Sameera As mentioned earlier, in order to maintain a good forum/community it is best to ask one query per thread and mark the answer as "Accepted" when your question has been properly answered and the reply was helpful. Asking different queries in a single thread and not accepting answers that were helpful is not good forum etiquette.
Created 02-20-2017 05:00 PM
Ok @Jay SenSharma. I did those things for all my questions.