<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: ambari cluster + both namenode are standby in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216640#M72244</link>
    <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/26229/uribarih.html" nodeid="26229"&gt;@Michael Bronson&lt;BR /&gt;&lt;/A&gt;&lt;/P&gt;&lt;P&gt;You are right that we need to first focus on what is blocking the port or why it is not opening.&lt;/P&gt;&lt;P&gt;In order to find that out, we will need to see the NameNode logs to determine whether a port conflict is being logged, or whether there are any errors/exceptions preventing the NameNode from opening the port successfully.&lt;/P&gt;</description>
    <pubDate>Wed, 06 Dec 2017 06:09:50 GMT</pubDate>
    <dc:creator>jsensharma</dc:creator>
    <dc:date>2017-12-06T06:09:50Z</dc:date>
    <item>
      <title>ambari cluster + both namenode are standby</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216621#M72225</link>
      <description>&lt;P&gt;we start the services in our ambari cluster as follows ( after reboot ): &lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="42898-capture.png" style="width: 410px;"&gt;&lt;img src="https://community.cloudera.com/t5/image/serverpage/image-id/16226i3117989715D27C01/image-size/medium?v=v2&amp;amp;px=400" role="button" title="42898-capture.png" alt="42898-capture.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;1. start ZK&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;2. start journal-node&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;3. start NameNode ( on master01 machine and on master02 machine )&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;and we noticed that both NameNodes are standby&lt;/P&gt;&lt;P&gt;how to force one of the nodes to become active ? &lt;/P&gt;&lt;P&gt;from log:&lt;/P&gt;&lt;PRE&gt; tail -200 hadoop-hdfs-namenode-master03.sys65.com.log

rics to be sent will be discarded. This message will be skipped for the next 20 times.
2017-12-04 18:56:03,649 WARN  namenode.FSEditLog (JournalSet.java:selectInputStreams(280)) - Unable to determine input streams from QJM to [152.87.28.153:8485, 152.87.28.152:8485, 152.87.27.162:8485]. Skipping.
java.io.IOException: Timed out waiting 20000ms for a quorum of nodes to respond.
        at org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:137)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectInputStreams(QuorumJournalManager.java:471)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet.selectInputStreams(JournalSet.java:278)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1590)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1614)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:251)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:402)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$300(EditLogTailer.java:355)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:372)
        at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:476)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:368)
2017-12-04 18:56:03,650 INFO  namenode.FSNamesystem (FSNamesystem.java:writeUnlock(1658)) - FSNamesystem write lock held for 20005 ms via
java.lang.Thread.getStackTrace(Thread.java:1556)
org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:945)
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1658)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:285)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:402)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$300(EditLogTailer.java:355)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:372)
org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:476)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:368)
        Number of suppressed write-lock reports: 0
        Longest write-lock held interval: 20005


2017-12-04 19:03:43,792 INFO  ha.EditLogTailer (EditLogTailer.java:triggerActiveLogRoll(323)) - Triggering log roll on remote NameNode
2017-12-04 19:03:43,820 INFO  ha.EditLogTailer (EditLogTailer.java:triggerActiveLogRoll(334)) - Skipping log roll. Remote node is not in Active state: Operation category JOURNAL is not supported in state standby
2017-12-04 19:03:49,824 INFO  client.QuorumJournalManager (QuorumCall.java:waitFor(136)) - Waited 6001 ms (timeout=20000 ms) for a response for selectInputStreams. Succeeded so far:
2017-12-04 19:03:50,825 INFO  client.QuorumJournalManager (QuorumCall.java:waitFor(136)) - Waited 7003 ms (timeout=20000 ms) for a response for selectInputStreams. Succeeded so far:
&lt;/PRE&gt;&lt;BR /&gt;&lt;IMG src="https://community.cloudera.com/t5/image/serverpage/image-id/6494i792C0CA683D9759E/image-size/large?v=1.0&amp;amp;px=999" border="0" alt="capture.png" title="capture.png" /&gt;</description>
      <pubDate>Sun, 18 Aug 2019 03:07:55 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216621#M72225</guid>
      <dc:creator>mike_bronson7</dc:creator>
      <dc:date>2019-08-18T03:07:55Z</dc:date>
    </item>
    <item>
      <title>Re: ambari cluster + both namenode are standby</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216622#M72226</link>
      <description>&lt;P&gt;From the error message, it looks like some of the services might not be running. Can you please make sure that zookeeper and journal nodes are indeed running before starting NN?&lt;/P&gt;</description>
      <pubDate>Wed, 06 Dec 2017 02:18:25 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216622#M72226</guid>
      <dc:creator>aengineer</dc:creator>
      <dc:date>2017-12-06T02:18:25Z</dc:date>
    </item>
    <item>
      <title>Re: ambari cluster + both namenode are standby</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216623#M72227</link>
      <description>&lt;P&gt;&lt;EM&gt;@&lt;A href="https://community.hortonworks.com/users/26229/uribarih.html"&gt;Michael Bronson&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Not resolved yet ?&lt;/EM&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 06 Dec 2017 02:35:31 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216623#M72227</guid>
      <dc:creator>Shelton</dc:creator>
      <dc:date>2017-12-06T02:35:31Z</dc:date>
    </item>
    <item>
      <title>Re: ambari cluster + both namenode are standby</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216624#M72228</link>
      <description>&lt;P&gt;yes, still not resolved; both NameNodes either do not start at all or start in standby&lt;/P&gt;</description>
      <pubDate>Wed, 06 Dec 2017 02:40:09 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216624#M72228</guid>
      <dc:creator>mike_bronson7</dc:creator>
      <dc:date>2017-12-06T02:40:09Z</dc:date>
    </item>
    <item>
      <title>Re: ambari cluster + both namenode are standby</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216625#M72229</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/26229/uribarih.html" nodeid="26229"&gt;@Michael Bronson&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Looks like you are using IP addresses instead of FQDNs (hostnames) for your components.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Example:&lt;/STRONG&gt;&lt;/P&gt;&lt;PRE&gt; QJM to [152.87.28.153:8485, 152.87.28.152:8485, 152.87.27.162:8485]&lt;/PRE&gt;&lt;P&gt;Please make sure to use the hostnames (FQDN) while defining the addresses of your HDFS components. Do not use the IP addresses.&lt;/P&gt;&lt;P&gt;Using a proper FQDN (hostname -f) is one of the major requirements for an HDFS cluster managed by Ambari.&lt;/P&gt;&lt;P&gt;&lt;A href="https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.0.0/bk_ambari-installation-ppc/content/edit_the_host_file.html" target="_blank"&gt;https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.0.0/bk_ambari-installation-ppc/content/edit_the_host_file.html&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&lt;A href="https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.0.0/bk_ambari-installation-ppc/content/set_the_hostname.html" target="_blank"&gt;https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.0.0/bk_ambari-installation-ppc/content/set_the_hostname.html&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&lt;A href="https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.0.0/bk_ambari-installation-ppc/content/edit_the_network_configuration_file.html" target="_blank"&gt;https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.0.0/bk_ambari-installation-ppc/content/edit_the_network_configuration_file.html&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Also please check whether your QJM processes are running fine on the mentioned hosts. Have the QJMs opened port "8485" properly? Or are you noticing any errors in the QJM logs?&lt;/P&gt;&lt;PRE&gt;# netstat -tnlpa | grep 8485
# tail -f /var/log/hadoop/hdfs/hadoop-hdfs-journalnode-xxxxxxxxxxxx.log &lt;/PRE&gt;</description>
      <pubDate>Wed, 06 Dec 2017 02:56:48 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216625#M72229</guid>
      <dc:creator>jsensharma</dc:creator>
      <dc:date>2017-12-06T02:56:48Z</dc:date>
    </item>
    <item>
      <title>Re: ambari cluster + both namenode are standby</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216626#M72230</link>
      <description>&lt;P&gt;yes, we get that on all master servers:&lt;/P&gt;&lt;PRE&gt;netstat -tnlpa | grep 8485 &lt;BR /&gt;tcp        0      0 0.0.0.0:8485            0.0.0.0:*               LISTEN      14395/java&lt;/PRE&gt;</description>
      <pubDate>Wed, 06 Dec 2017 03:17:24 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216626#M72230</guid>
      <dc:creator>mike_bronson7</dc:creator>
      <dc:date>2017-12-06T03:17:24Z</dc:date>
    </item>
    <item>
      <title>Re: ambari cluster + both namenode are standby</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216627#M72231</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/26229/uribarih.html" nodeid="26229"&gt;@Michael Bronson&lt;BR /&gt;&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Please check your hdfs-site and core-site configurations to confirm that you are using hostnames for the components instead of IP addresses.&lt;/P&gt;&lt;P style="margin-left: 20px;"&gt;Also please double-check that all the hostnames are in lowercase (mixed-case or uppercase hostnames will cause such issues). Properties like "dfs.namenode.http-address", "dfs.namenode.http-address.$SERVICE_NAME.nn1", etc. should contain hostnames (not IP addresses).&lt;/P&gt;&lt;P style="margin-left: 20px;"&gt;Also, there should be no firewall issues while accessing the NameNode UI / JMX from the Ambari server host.&lt;/P&gt;</description>
      <pubDate>Wed, 06 Dec 2017 03:37:35 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216627#M72231</guid>
      <dc:creator>jsensharma</dc:creator>
      <dc:date>2017-12-06T03:37:35Z</dc:date>
    </item>
    <item>
      <title>Re: ambari cluster + both namenode are standby</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216628#M72232</link>
      <description>&lt;P&gt;dear Jay, I checked everything you said and it seems OK ( yes, we use only hostnames in the XML files ). About "JMX from ambari server host" - what do we need to check here?&lt;/P&gt;&lt;P&gt;second, I have been on this case for more than two days; how can we debug it more deeply?&lt;/P&gt;</description>
      <pubDate>Wed, 06 Dec 2017 04:56:41 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216628#M72232</guid>
      <dc:creator>mike_bronson7</dc:creator>
      <dc:date>2017-12-06T04:56:41Z</dc:date>
    </item>
    <item>
      <title>Re: ambari cluster + both namenode are standby</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216629#M72233</link>
      <description>&lt;P&gt;I found something&lt;/P&gt;&lt;P&gt;reference - &lt;A href="https://ambari.apache.org/1.2.3/installing-hadoop-using-ambari/content/reference_chap2_1.html" target="_blank"&gt;https://ambari.apache.org/1.2.3/installing-hadoop-using-ambari/content/reference_chap2_1.html&lt;/A&gt;&lt;/P&gt;&lt;PRE&gt;netstat -tnlpa | grep 50070&lt;/PRE&gt;&lt;P&gt;does not return any output, and this API also returns no output:&lt;/P&gt;&lt;PRE&gt;curl -s 'http://&amp;lt;master&amp;gt;:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem'&lt;/PRE&gt;</description>
      <pubDate>Wed, 06 Dec 2017 05:16:52 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216629#M72233</guid>
      <dc:creator>mike_bronson7</dc:creator>
      <dc:date>2017-12-06T05:16:52Z</dc:date>
    </item>
    <item>
      <title>Re: ambari cluster + both namenode are standby</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216630#M72234</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/26229/uribarih.html" nodeid="26229"&gt;@Michael Bronson&lt;BR /&gt;&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Based on the "netstat" output we can see that port 50070 is not open on the NameNode host, which indicates that the NameNode might not have come up successfully.&lt;/P&gt;&lt;P&gt;So please check the NameNode logs first to see if there are any errors that are causing the NameNode process to not come up cleanly, or if there is any issue while opening port 50070.&lt;/P&gt;&lt;P&gt;I suggest putting the NameNode log in "tail" mode and then restarting the whole HDFS service from the Ambari UI.&lt;/P&gt;</description>
      <pubDate>Wed, 06 Dec 2017 05:22:34 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216630#M72234</guid>
      <dc:creator>jsensharma</dc:creator>
      <dc:date>2017-12-06T05:22:34Z</dc:date>
    </item>
    <item>
      <title>Re: ambari cluster + both namenode are standby</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216631#M72235</link>
      <description>&lt;P&gt;how to put the NameNode log in "tail" mode ?&lt;/P&gt;&lt;P&gt;second, how to force the port to start?&lt;/P&gt;</description>
      <pubDate>Wed, 06 Dec 2017 05:32:45 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216631#M72235</guid>
      <dc:creator>mike_bronson7</dc:creator>
      <dc:date>2017-12-06T05:32:45Z</dc:date>
    </item>
    <item>
      <title>Re: ambari cluster + both namenode are standby</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216632#M72236</link>
      <description>&lt;P&gt;from the log I can see that&lt;/P&gt;&lt;PRE&gt;Getting jmx metrics from NN failed. URL: &lt;A href="http://&amp;lt;master&amp;gt;:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem" target="_blank"&gt;http://&amp;lt;master&amp;gt;:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem&lt;/A&gt;
Traceback (most recent call last):&lt;/PRE&gt;</description>
      <pubDate>Wed, 06 Dec 2017 05:37:00 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216632#M72236</guid>
      <dc:creator>mike_bronson7</dc:creator>
      <dc:date>2017-12-06T05:37:00Z</dc:date>
    </item>
    <item>
      <title>Re: ambari cluster + both namenode are standby</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216633#M72237</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/26229/uribarih.html" nodeid="26229"&gt;@Michael Bronson&lt;BR /&gt;&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Inside your NameNode host you will find a log file named "" where you can enable tail as follows.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Example:&lt;/STRONG&gt;&lt;/P&gt;&lt;PRE&gt;# tail -f /var/log/hadoop/hdfs/hadoop-hdfs-namenode-xxxxxxxxxxxxxx.log&lt;/PRE&gt;&lt;P&gt;The JMX URL shows that fetching the JMX metrics from the NN failed because port 50070 seems to be down.&lt;/P&gt;&lt;P&gt;Regarding your query: "how to force the port to start?"&lt;/P&gt;&lt;P&gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt; The only way to make sure that the port is opened properly is to ensure that the NameNode starts fine without any error. So please check the NameNode log to see if there is any error.&lt;/P&gt;</description>
      <pubDate>Wed, 06 Dec 2017 05:41:06 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216633#M72237</guid>
      <dc:creator>jsensharma</dc:creator>
      <dc:date>2017-12-06T05:41:06Z</dc:date>
    </item>
    <item>
      <title>Re: ambari cluster + both namenode are standby</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216634#M72238</link>
      <description>&lt;P&gt;the errors are&lt;/P&gt;&lt;P&gt;so how can we understand from these errors why the port is down?&lt;/P&gt;&lt;PRE&gt;    org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:393)
2017-12-05 20:33:23,716 FATAL namenode.FSEditLog (JournalSet.java:mapJournalsAndReportErrors(398)) - Error: recoverUnfinalizedSegments failed for required journal (JournalAndStream(mgr=QJM to [34.98.28.153:8485, 34.98.28.152:8485, 34.98.27.162:8485], stream=null))
        at org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:393)
    org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:393)
2017-12-05 21:03:41,334 FATAL namenode.FSEditLog (JournalSet.java:mapJournalsAndReportErrors(398)) - Error: recoverUnfinalizedSegments failed for required journal (JournalAndStream(mgr=QJM to [34.98.28.153:8485, 34.98.28.152:8485, 34.98.27.162:8485], stream=null))
        at org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:393)&lt;/PRE&gt;</description>
      <pubDate>Wed, 06 Dec 2017 05:53:15 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216634#M72238</guid>
      <dc:creator>mike_bronson7</dc:creator>
      <dc:date>2017-12-06T05:53:15Z</dc:date>
    </item>
    <item>
      <title>Re: ambari cluster + both namenode are standby</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216635#M72239</link>
      <description>&lt;P&gt;the errors are&lt;/P&gt;&lt;PRE&gt;2017-12-05 21:46:14,814 FATAL namenode.FSEditLog (JournalSet.java:mapJournalsAndReportErrors(398)) - Error: recoverUnfinalizedSegments failed for required journal (JournalAndStream(mgr=QJM to [100.164.28.153:8485, 100.164.28.152:8485, 100.164.27.162:8485], stream=null))
        at org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:393)&lt;/PRE&gt;&lt;P&gt;I also checked that&lt;/P&gt;&lt;PRE&gt;telnet localhost 50070
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused&lt;/PRE&gt;</description>
      <pubDate>Wed, 06 Dec 2017 05:56:49 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216635#M72239</guid>
      <dc:creator>mike_bronson7</dc:creator>
      <dc:date>2017-12-06T05:56:49Z</dc:date>
    </item>
    <item>
      <title>Re: ambari cluster + both namenode are standby</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216636#M72240</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/26229/uribarih.html" nodeid="26229"&gt;@Michael Bronson&lt;BR /&gt;&lt;/A&gt;&lt;/P&gt;&lt;P&gt;As the JournalNodes are not running, and neither are the Zookeeper Failover Controllers (ZKFC), please restart those components first.&lt;/P&gt;&lt;P&gt;It will be best to try restarting the whole HDFS service from the Ambari UI:&lt;/P&gt;&lt;P&gt;Ambari UI --&amp;gt; HDFS --&amp;gt; "Service Actions" (drop-down) --&amp;gt; Restart All&lt;/P&gt;&lt;P&gt;Then please check whether all components come up fine or not.&lt;/P&gt;&lt;P&gt;Please share the &lt;STRONG&gt;complete logs&lt;/STRONG&gt; of all the components (like the NameNode, JournalNode, and ZKFC logs) which fail to restart successfully.&lt;/P&gt;</description>
      <pubDate>Wed, 06 Dec 2017 05:57:15 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216636#M72240</guid>
      <dc:creator>jsensharma</dc:creator>
      <dc:date>2017-12-06T05:57:15Z</dc:date>
    </item>
    <item>
      <title>Re: ambari cluster + both namenode are standby</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216637#M72241</link>
      <description>&lt;P&gt;the picture for now is ( the JournalNodes are running, and the Zookeeper Failover Controllers are running as well )&lt;/P&gt;&lt;P&gt;second, we performed a full restart more than twice, but without results &lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="42942-capture.png" style="width: 504px;"&gt;&lt;img src="https://community.cloudera.com/t5/image/serverpage/image-id/16225i4913FEE4070BF006/image-size/medium?v=v2&amp;amp;px=400" role="button" title="42942-capture.png" alt="42942-capture.png" /&gt;&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Sun, 18 Aug 2019 03:07:48 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216637#M72241</guid>
      <dc:creator>mike_bronson7</dc:creator>
      <dc:date>2019-08-18T03:07:48Z</dc:date>
    </item>
    <item>
      <title>Re: ambari cluster + both namenode are standby</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216638#M72242</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/26229/uribarih.html" nodeid="26229"&gt;@Michael Bronson&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Good to see that now both ZKFCs, all 3 JournalNodes, and all 4 DataNodes are running (are green).&lt;/P&gt;&lt;P&gt;Regarding both NameNodes being down: we will need to investigate the NameNode logs in order to find out why they are not running.&lt;/P&gt;&lt;P&gt;So can you please share/attach the complete NN logs?&lt;/P&gt;</description>
      <pubDate>Wed, 06 Dec 2017 06:05:19 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216638#M72242</guid>
      <dc:creator>jsensharma</dc:creator>
      <dc:date>2017-12-06T06:05:19Z</dc:date>
    </item>
    <item>
      <title>Re: ambari cluster + both namenode are standby</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216639#M72243</link>
      <description>&lt;P&gt;@Jay maybe we need to first focus on what is blocking the port or why the port does not start&lt;/P&gt;</description>
      <pubDate>Wed, 06 Dec 2017 06:07:19 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216639#M72243</guid>
      <dc:creator>mike_bronson7</dc:creator>
      <dc:date>2017-12-06T06:07:19Z</dc:date>
    </item>
    <item>
      <title>Re: ambari cluster + both namenode are standby</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216640#M72244</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/26229/uribarih.html" nodeid="26229"&gt;@Michael Bronson&lt;BR /&gt;&lt;/A&gt;&lt;/P&gt;&lt;P&gt;You are right that we need to first focus on what is blocking the port or why it is not opening.&lt;/P&gt;&lt;P&gt;In order to find that out, we will need to see the NameNode logs to determine whether a port conflict is being logged, or whether there are any errors/exceptions preventing the NameNode from opening the port successfully.&lt;/P&gt;</description>
      <pubDate>Wed, 06 Dec 2017 06:09:50 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/ambari-cluster-both-namenode-are-standby/m-p/216640#M72244</guid>
      <dc:creator>jsensharma</dc:creator>
      <dc:date>2017-12-06T06:09:50Z</dc:date>
    </item>
  </channel>
</rss>

