<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Failed to start namenode. java.io.IOException: Timed out waiting for getJournalCTime() response in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Failed-to-start-namenode-java-io-IOException-Timed-out/m-p/289750#M214440</link>
    <description>&lt;P&gt;Solved by copying the &lt;CODE&gt;/dfs/jn&lt;/CODE&gt; folder from master01.ib (one of the nodes still in sync) to master03.ib.&lt;/P&gt;</description>
    <pubDate>Fri, 14 Feb 2020 00:23:52 GMT</pubDate>
    <dc:creator>astappiev</dc:creator>
    <dc:date>2020-02-14T00:23:52Z</dc:date>
    <item>
      <title>Failed to start namenode. java.io.IOException: Timed out waiting for getJournalCTime() response</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Failed-to-start-namenode-java-io-IOException-Timed-out/m-p/289736#M214434</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I'm upgrading CDH from 5.13.0 to 6.3.1 and cannot proceed past "&lt;SPAN&gt;Upgrade HDFS Metadata&lt;/SPAN&gt;".&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;On the 3rd step, after "Starting the JournalNodes." and "&lt;SPAN&gt;Starting metadata upgrade on Active NameNode of nameservice nameservice1.&lt;/SPAN&gt;", it stalls at "&lt;SPAN&gt;Waiting for NameNode (master02) to start responding to RPCs.&lt;/SPAN&gt;"&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;It just freezes, but according to the log records it will not continue, because the process has failed: a quorum of JournalNodes could not be reached. master03.ib (10.12.0.3) is not responding.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;What can I do? What could cause the issue? Can I run the following steps manually?&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;The log says the following:&lt;/SPAN&gt;&lt;/P&gt;
&lt;LI-SPOILER&gt;
&lt;P&gt;6:43:03.610 PM WARN QuorumJournalManager&lt;/P&gt;
&lt;P&gt;Waited 55044 ms (timeout=60000 ms) for a response for getJournalCTime. Succeeded so far: [10.12.0.2:8485,10.12.0.1:8485]&lt;/P&gt;
&lt;P&gt;6:43:04.611 PM WARN QuorumJournalManager&lt;/P&gt;
&lt;P&gt;Waited 56045 ms (timeout=60000 ms) for a response for getJournalCTime. Succeeded so far: [10.12.0.2:8485,10.12.0.1:8485]&lt;/P&gt;
&lt;P&gt;6:43:05.611 PM WARN QuorumJournalManager&lt;/P&gt;
&lt;P&gt;Waited 57046 ms (timeout=60000 ms) for a response for getJournalCTime. Succeeded so far: [10.12.0.2:8485,10.12.0.1:8485]&lt;/P&gt;
&lt;P&gt;6:43:06.613 PM WARN QuorumJournalManager&lt;/P&gt;
&lt;P&gt;Waited 58047 ms (timeout=60000 ms) for a response for getJournalCTime. Succeeded so far: [10.12.0.2:8485,10.12.0.1:8485]&lt;/P&gt;
&lt;P&gt;6:43:07.614 PM WARN QuorumJournalManager&lt;/P&gt;
&lt;P&gt;Waited 59048 ms (timeout=60000 ms) for a response for getJournalCTime. Succeeded so far: [10.12.0.2:8485,10.12.0.1:8485]&lt;/P&gt;
&lt;P&gt;6:43:08.673 PM INFO FSNamesystem&lt;/P&gt;
&lt;P&gt;FSNamesystem write lock held for 60244 ms via&lt;BR /&gt;java.lang.Thread.getStackTrace(Thread.java:1559)&lt;BR /&gt;org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:1032)&lt;BR /&gt;org.apache.hadoop.hdfs.server.namenode.FSNamesystemLock.writeUnlock(FSNamesystemLock.java:263)&lt;BR /&gt;org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1604)&lt;BR /&gt;org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1111)&lt;BR /&gt;org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:709)&lt;BR /&gt;org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:665)&lt;BR /&gt;org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:727)&lt;BR /&gt;org.apache.hadoop.hdfs.server.namenode.NameNode.&amp;lt;init&amp;gt;(NameNode.java:950)&lt;BR /&gt;org.apache.hadoop.hdfs.server.namenode.NameNode.&amp;lt;init&amp;gt;(NameNode.java:929)&lt;BR /&gt;org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1653)&lt;BR /&gt;org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1720)&lt;BR /&gt;Number of suppressed write-lock reports: 0&lt;BR /&gt;Longest write-lock held interval: 60244&lt;/P&gt;
&lt;P&gt;6:43:08.675 PM WARN FSNamesystem&lt;/P&gt;
&lt;P&gt;Encountered exception loading fsimage&lt;BR /&gt;java.io.IOException: Timed out waiting for getJournalCTime() response&lt;BR /&gt;at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.getJournalCTime(QuorumJournalManager.java:678)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSEditLog.getSharedLogCTime(FSEditLog.java:1613)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSImage.initEditLog(FSImage.java:829)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:683)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSImage.doUpgrade(FSImage.java:443)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:310)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1084)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:709)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:665)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:727)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.NameNode.&amp;lt;init&amp;gt;(NameNode.java:950)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.NameNode.&amp;lt;init&amp;gt;(NameNode.java:929)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1653)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1720)&lt;/P&gt;
&lt;P&gt;6:43:08.698 PM INFO ContextHandler&lt;/P&gt;
&lt;P&gt;Stopped o.e.j.w.WebAppContext@52045dbe{/,null,UNAVAILABLE}{/hdfs}&lt;/P&gt;
&lt;P&gt;6:43:08.704 PM INFO AbstractConnector&lt;/P&gt;
&lt;P&gt;Stopped ServerConnector@34997338{HTTP/1.1,[http/1.1]}{master02.ib:9870}&lt;/P&gt;
&lt;P&gt;6:43:08.705 PM INFO ContextHandler&lt;/P&gt;
&lt;P&gt;Stopped o.e.j.s.ServletContextHandler@4d722ac9{/static,file:///opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/lib/hadoop-hdfs/webapps/static/,UNAVAILABLE}&lt;/P&gt;
&lt;P&gt;6:43:08.705 PM INFO ContextHandler&lt;/P&gt;
&lt;P&gt;Stopped o.e.j.s.ServletContextHandler@2320fa6f{/logs,file:///var/log/hadoop-hdfs/,UNAVAILABLE}&lt;/P&gt;
&lt;P&gt;6:43:08.709 PM INFO MetricsSystemImpl&lt;/P&gt;
&lt;P&gt;Stopping NameNode metrics system...&lt;/P&gt;
&lt;P&gt;6:43:08.710 PM INFO MetricsSystemImpl&lt;/P&gt;
&lt;P&gt;NameNode metrics system stopped.&lt;/P&gt;
&lt;P&gt;6:43:08.710 PM INFO MetricsSystemImpl&lt;/P&gt;
&lt;P&gt;NameNode metrics system shutdown complete.&lt;/P&gt;
&lt;P&gt;6:43:08.711 PM ERROR NameNode&lt;/P&gt;
&lt;P&gt;Failed to start namenode.&lt;BR /&gt;java.io.IOException: Timed out waiting for getJournalCTime() response&lt;BR /&gt;at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.getJournalCTime(QuorumJournalManager.java:678)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSEditLog.getSharedLogCTime(FSEditLog.java:1613)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSImage.initEditLog(FSImage.java:829)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:683)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSImage.doUpgrade(FSImage.java:443)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:310)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1084)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:709)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:665)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:727)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.NameNode.&amp;lt;init&amp;gt;(NameNode.java:950)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.NameNode.&amp;lt;init&amp;gt;(NameNode.java:929)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1653)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1720)&lt;/P&gt;
&lt;P&gt;6:43:08.714 PM INFO ExitUtil&lt;/P&gt;
&lt;P&gt;Exiting with status 1: java.io.IOException: Timed out waiting for getJournalCTime() response&lt;/P&gt;
&lt;P&gt;6:43:08.717 PM INFO NameNode&lt;/P&gt;
&lt;P&gt;SHUTDOWN_MSG:&lt;BR /&gt;/************************************************************&lt;BR /&gt;SHUTDOWN_MSG: Shutting down NameNode at master02.ib/10.12.0.2&lt;BR /&gt;************************************************************/&lt;/P&gt;
&lt;/LI-SPOILER&gt;
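&lt;P&gt;Since only 10.12.0.2 and 10.12.0.1 ever appear in the "Succeeded so far" list, a quick first check is whether the JournalNode on master03.ib is reachable at all. The sketch below is a minimal, hedged diagnostic; the hostnames come from the log above, and the ports are the stock HDFS defaults (8485 RPC, 8480 HTTP), so adjust them if your cluster overrides &lt;CODE&gt;dfs.journalnode.rpc-address&lt;/CODE&gt; or &lt;CODE&gt;dfs.journalnode.http-address&lt;/CODE&gt;.&lt;/P&gt;

```shell
# Hedged sketch: connectivity checks for the JournalNode on master03.ib
# (10.12.0.3), the node the quorum reports as not responding.
# Hostname and ports are assumptions taken from the log output and the
# stock HDFS defaults; adjust for your cluster configuration.
JN_HOST="master03.ib"
JN_RPC_PORT=8485   # dfs.journalnode.rpc-address default port
JN_HTTP_PORT=8480  # dfs.journalnode.http-address default port

# Is the RPC port reachable at all? (bash /dev/tcp avoids needing nc)
if timeout 5 bash -c "</dev/tcp/${JN_HOST}/${JN_RPC_PORT}" 2>/dev/null; then
  echo "JournalNode RPC port ${JN_RPC_PORT} on ${JN_HOST} is reachable"
else
  echo "Cannot reach ${JN_HOST}:${JN_RPC_PORT} - check the JournalNode process and firewall"
fi

# The JMX endpoint also shows whether the daemon itself is up.
curl --silent --max-time 5 "http://${JN_HOST}:${JN_HTTP_PORT}/jmx" >/dev/null \
  && echo "JournalNode web UI responds" \
  || echo "JournalNode web UI not responding"
```

&lt;P&gt;If the port is closed, checking the JournalNode role status and its own logs on master03.ib would be the next step before retrying the upgrade.&lt;/P&gt;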
&lt;P&gt;Thanks and best regards,&lt;/P&gt;
&lt;P&gt;Oleh&lt;/P&gt;</description>
      <pubDate>Thu, 13 Feb 2020 18:00:48 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Failed-to-start-namenode-java-io-IOException-Timed-out/m-p/289736#M214434</guid>
      <dc:creator>astappiev</dc:creator>
      <dc:date>2020-02-13T18:00:48Z</dc:date>
    </item>
    <item>
      <title>Re: Failed to start namenode. java.io.IOException: Timed out waiting for getJournalCTime() response</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Failed-to-start-namenode-java-io-IOException-Timed-out/m-p/289750#M214440</link>
      <description>&lt;P&gt;Solved by copying the &lt;CODE&gt;/dfs/jn&lt;/CODE&gt; folder from master01.ib (one of the nodes still in sync) to master03.ib.&lt;/P&gt;</description>
      <pubDate>Fri, 14 Feb 2020 00:23:52 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Failed-to-start-namenode-java-io-IOException-Timed-out/m-p/289750#M214440</guid>
      <dc:creator>astappiev</dc:creator>
      <dc:date>2020-02-14T00:23:52Z</dc:date>
    </item>
  </channel>
</rss>