
NameNode not starting for HDP 3.1.5 HDFS

Hi Experts,

We are currently in the middle of an upgrade from HDP 2.6.5 to HDP 3.1.5, just before finalizing the upgrade state.

As instructed we wanted to make sure everything is fine before we finalize the upgrade.

However, when we checked, we found that the HDFS NameNode is not starting. We have an HA configuration for HDFS; both ZKFailoverControllers are running, but neither the active nor the standby NameNode is starting, and they keep retrying without success.

I am attaching the logs and screenshots for your analysis. Please help.

Thanks and Regards,
Ananya

1 ACCEPTED SOLUTION


@Ananya_Misra The NameNode restart failed with the below exception:

 

'org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /hadoop/hdfs/namenode is in an inconsistent state: previous fs state should not exist during upgrade. Finalize or rollback first.' 

This issue occurs because, during an upgrade, the NameNode creates a different directory layout (including a saved previous state) under the NameNode directory. If the NameNode is stopped before the upgrade is finalized, the directory is left in an inconsistent state, and on restart the NameNode cannot load the directory with the different layout.

 

To resolve this issue, start the NameNode manually from the command line with the -rollingUpgrade started option and then proceed with the upgrade:

 

# su -l hdfs -c '/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start namenode -rollingUpgrade started'
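For completeness, once both NameNodes are back up, the rolling upgrade can be checked and then finalized with the standard dfsadmin subcommands (a sketch, run as the hdfs user; exact behavior depends on your cluster state):

```shell
# Check whether a rolling upgrade is still in progress
su -l hdfs -c 'hdfs dfsadmin -rollingUpgrade query'

# Once HDFS is verified healthy, finalize so the saved pre-upgrade
# state ('previous' directories) is cleaned up
su -l hdfs -c 'hdfs dfsadmin -rollingUpgrade finalize'
```

Note that finalize is irreversible: after it, a rollback to the pre-upgrade state is no longer possible.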

Cheers!
Was your question answered? Make sure to mark the answer as the accepted solution.
If you find a reply useful, say thanks by clicking on the thumbs up button.


4 REPLIES

Hi Experts,

Please help here. I am stuck in the middle of my upgrade.

Thanks and Regards,

Ananya

@Ananya_Misra It seems your curl command is not working. There is a possibility that the curl installed on the system doesn't support the --negotiate option. In that case, the curl command at the OS prompt returns the following output:

[hdfs@namenode_hostname ~]$ curl --negotiate -u : -s "http://namenode_hostname:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem" 
curl: option --negotiate: the installed libcurl version doesn't support this 
curl: try 'curl --help' or 'curl --manual' for more information 

[hdfs@namenode_hostname ~]$ which curl

[hdfs@namenode_hostname ~]$ curl --version

 

So I would suggest you manually run the above curl commands to see whether they work and what output they produce.


You might need to uninstall the curl package on the affected nodes and install it from the default OS packages, or change $PATH so that the OS-default curl is used; in either case, verify with the above commands first.
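As a quick check, curl reports its supported features in its version output; a libcurl built with Kerberos/SPNEGO support lists them on the Features line. A minimal sketch, assuming curl is on the PATH:

```shell
# Print curl's version/feature output and look for Negotiate support;
# a build without SPNEGO/GSS support prints nothing here.
curl --version | grep -iE 'spnego|gss|negotiate'
```

If this prints no match, the installed curl cannot perform --negotiate authentication, which matches the error above.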

 

Also, if you are an entitled Cloudera customer, feel free to open a case with Cloudera so that you can get prompt assistance.


Cheers!

Hi @GangWar ,

 

Thank you very much for your prompt response. However, there is no issue with curl on our servers.

 

I investigated further and found that /var/log/hadoop/hdfs/namenode.log indicates the following error.

2020-10-06 16:19:01,877 ERROR namenode.NameNode (NameNode.java:main(1715)) - Failed to start namenode.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /master/hadoop/hdfs/namenode is in an inconsistent state: previous fs state should not exist during upgrade. Finalize or rollback first.
at org.apache.hadoop.hdfs.server.namenode.FSImage.checkUpgrade(FSImage.java:411)
at org.apache.hadoop.hdfs.server.namenode.FSImage.checkUpgrade(FSImage.java:418)
at org.apache.hadoop.hdfs.server.namenode.FSImage.doUpgrade(FSImage.java:438)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:310)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1090)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:910)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
2020-10-06 16:19:01,879 INFO util.ExitUtil (ExitUtil.java:terminate(210)) - Exiting with status 1: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /master/hadoop/hdfs/namenode is in an inconsistent state: previous fs state should not exist during upgrade. Finalize or rollback first.
2020-10-06 16:19:01,879 INFO timeline.HadoopTimelineMetricsSink (AbstractTimelineMetricsSink.java:getCurrentCollectorHost(291)) - No live collector to send metrics to. Metrics to be sent will be discarded. This message will be skipped for the next 20 times.
2020-10-06 16:19:01,880 INFO namenode.NameNode (LogAdapter.java:info(51)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at H8CC0328.stb.gov.sg/10.21.21.85
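For context, this exception means the NameNode metadata directory still holds the saved pre-upgrade state. An illustrative way to confirm is to list the directory named in the log message (path taken from the error above):

```shell
# During an upgrade the NameNode keeps pre-upgrade metadata in 'previous'
# alongside 'current'; both being present matches the exception in the log.
ls -l /master/hadoop/hdfs/namenode
```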

 

Also, I tried to finalize my upgrade, but Finalize HDFS is failing. The error messages are attached.

 

Please help.

 

Thanks and Regards,

Ananya
