
Ambari cluster: both NameNodes are standby


We start the services in our Ambari cluster in the following order (after a reboot):


1. Start ZooKeeper

2. Start the JournalNodes

3. Start the NameNodes (on the master01 and master02 machines)


We noticed that both NameNodes are standby.

How can we force one of the nodes to become active?
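(For reference, once the JournalNode quorum is healthy again, the HA state can be inspected, and if automatic failover is not enabled, switched manually, with the haadmin tool. The NameNode IDs nn1/nn2 below are examples; use the IDs defined in your hdfs-site.xml.)

```shell
# Check the HA state of each NameNode (IDs are examples):
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

# Without automatic failover (ZKFC), a manual transition works:
hdfs haadmin -transitionToActive nn1

# With ZKFC enabled, prefer restarting the failover controllers;
# forcing a transition bypasses fencing checks and is a last resort:
hdfs haadmin -transitionToActive --forcemanual nn1
```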

from log:

 tail -200 hadoop-hdfs-namenode-master03.sys65.com.log

rics to be sent will be discarded. This message will be skipped for the next 20 times.
2017-12-04 18:56:03,649 WARN  namenode.FSEditLog (JournalSet.java:selectInputStreams(280)) - Unable to determine input streams from QJM to [152.87.28.153:8485, 152.87.28.152:8485, 152.87.27.162:8485]. Skipping.
java.io.IOException: Timed out waiting 20000ms for a quorum of nodes to respond.
        at org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:137)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectInputStreams(QuorumJournalManager.java:471)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet.selectInputStreams(JournalSet.java:278)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1590)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1614)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:251)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:402)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$300(EditLogTailer.java:355)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:372)
        at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:476)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:368)
2017-12-04 18:56:03,650 INFO  namenode.FSNamesystem (FSNamesystem.java:writeUnlock(1658)) - FSNamesystem write lock held for 20005 ms via
java.lang.Thread.getStackTrace(Thread.java:1556)
org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:945)
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1658)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:285)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:402)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$300(EditLogTailer.java:355)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:372)
org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:476)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:368)
        Number of suppressed write-lock reports: 0
        Longest write-lock held interval: 20005


2017-12-04 19:03:43,792 INFO  ha.EditLogTailer (EditLogTailer.java:triggerActiveLogRoll(323)) - Triggering log roll on remote NameNode
2017-12-04 19:03:43,820 INFO  ha.EditLogTailer (EditLogTailer.java:triggerActiveLogRoll(334)) - Skipping log roll. Remote node is not in Active state: Operation category JOURNAL is not supported in state standby
2017-12-04 19:03:49,824 INFO  client.QuorumJournalManager (QuorumCall.java:waitFor(136)) - Waited 6001 ms (timeout=20000 ms) for a response for selectInputStreams. Succeeded so far:
2017-12-04 19:03:50,825 INFO  client.QuorumJournalManager (QuorumCall.java:waitFor(136)) - Waited 7003 ms (timeout=20000 ms) for a response for selectInputStreams. Succeeded so far:



Michael-Bronson
1 ACCEPTED SOLUTION

Master Mentor
@Michael Bronson

We see the following error in your NameNode log:

2017-12-05 21:46:14,814 FATAL namenode.FSEditLog (JournalSet.java:mapJournalsAndReportErrors(398)) - Error: recoverUnfinalizedSegments failed for required journal (JournalAndStream(mgr=QJM to [10.164.28.153:8485, 10.164.28.152:8485, 10.164.27.162:8485], stream=null))
java.io.IOException: Timed out waiting 120000ms for a quorum of nodes to respond.
        at org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:137)


This indicates that the JournalNodes have issues, which is why the NameNode is not coming up.


This kind of error is often caused by corruption of the 'edits_inprogress_xxxxxxx' file on a JournalNode.


So please check whether the 'edits_inprogress_xxxxxxx' files on the JournalNodes are corrupt; corrupt files need to be removed.
Move (or take a backup of) the corrupt "edits_inprogress" file to /tmp, or copy the edits directory ("/hadoop/hdfs/journal/XXXXXXX/current") from a functioning JournalNode to this node, then restart the JournalNode and NameNode services. Check the JournalNode logs on all three nodes to find out which JournalNode is running without errors.
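As a sketch of that recovery (the directory layout, segment name, and steps here are illustrative and simulated on temp directories; on a real cluster, stop the JournalNode first and operate on its actual journal directory, typically /hadoop/hdfs/journal/&lt;nameservice&gt;/current):

```shell
# Simulated journal directory, only for illustration.
JN_DIR=$(mktemp -d)/current
BACKUP=$(mktemp -d)
mkdir -p "$JN_DIR"
touch "$JN_DIR/edits_inprogress_0000000000000042891"   # stand-in for the corrupt segment

# Move the suspect in-progress segment out of the way instead of deleting it.
mv "$JN_DIR"/edits_inprogress_* "$BACKUP"/

# After this, either copy the edits directory from a healthy JournalNode
# (e.g. with rsync/scp) or let the quorum resync, then restart the
# JournalNode and NameNode services.
ls "$BACKUP"
```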


View solution in original post

24 REPLIES

Expert Contributor

From the error message, it looks like some of the services might not be running. Can you please make sure that ZooKeeper and the JournalNodes are actually running before starting the NameNodes?

Master Mentor

@Michael Bronson

Not resolved yet?


Yes, still not resolved: the NameNodes either do not start up, or they start up as standby.

Michael-Bronson

Master Mentor

@Michael Bronson

It looks like you are using IP addresses instead of FQDNs (hostnames) for your components.

Example:

 QJM to [152.87.28.153:8485, 152.87.28.152:8485, 152.87.27.162:8485]


Please make sure to use hostnames (FQDNs) when defining the addresses of your HDFS components. Do not use IP addresses.

Using a proper FQDN (hostname -f) is one of the major requirements for an HDFS cluster managed by Ambari.

https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.0.0/bk_ambari-installation-ppc/content/edit_the...

https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.0.0/bk_ambari-installation-ppc/content/set_the_...

https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.0.0/bk_ambari-installation-ppc/content/edit_the...


Also, please check whether your QJM processes are running fine on the mentioned hosts. Have the QJMs opened port "8485" properly? Do you notice any errors in the QJM logs?

# netstat -tnlpa | grep 8485
# tail -f /var/log/hadoop/hdfs/hadoop-hdfs-journalnode-xxxxxxxxxxxx.log 



Yes, we get this on all master servers:

netstat -tnlpa | grep 8485 
tcp 0 0 0.0.0.0:8485 0.0.0.0:* LISTEN 14395/java
Michael-Bronson

Master Mentor

@Michael Bronson

Please check your hdfs-site and core-site configurations to confirm that you are using hostnames, not IP addresses, for the components.

Also, please double-check that all hostnames are lowercase (mixed-case or uppercase hostnames will cause such issues). Properties like "dfs.namenode.http-address" and "dfs.namenode.http-address.$SERVICE_NAME.nn1" should contain hostnames, not IP addresses.

There should also be no firewall issues when accessing the NameNode UI / JMX from the Ambari server host.
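A quick, scriptable way to spot both problems at once. The hdfs-site.xml fragment below is fabricated for illustration; on a real cluster you would point CONF at /etc/hadoop/conf/hdfs-site.xml instead:

```shell
# Sample config fragment, only for illustration.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
<property>
  <name>dfs.namenode.http-address.mycluster.nn1</name>
  <value>152.87.28.153:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.mycluster.nn2</name>
  <value>Master02.sys65.com:50070</value>
</property>
EOF

# Flag values that use raw IPv4 addresses instead of hostnames:
grep -E '<value>[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+:[0-9]+</value>' "$CONF"

# Flag values containing uppercase letters (hostnames should be lowercase):
grep -E '<value>[^<]*[A-Z][^<]*</value>' "$CONF"

rm -f "$CONF"
```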


Dear Jay, I checked everything you said and it seems OK (yes, we use only hostnames in the XML files). About "JMX from the Ambari server host": what do we need to check here?

Second, I have been on this case for more than two days. How can we debug it more deeply?

Michael-Bronson


I found something.

Reference: https://ambari.apache.org/1.2.3/installing-hadoop-using-ambari/content/reference_chap2_1.html

netstat -tnlpa | grep 50070

returns no output, and this API also returns no output:

curl -s 'http://<master>:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem'
Michael-Bronson

Master Mentor

@Michael Bronson

Based on the "netstat" output, we can see that port 50070 is not open on the NameNode host, which indicates that the NameNode might not have come up successfully.

So please check the NameNode logs first to see whether any errors are preventing the NameNode process from coming up cleanly, or whether there is an issue with opening port 50070.

I suggest putting the NameNode log in "tail" mode and then restarting the whole HDFS service from the Ambari UI.
