
What is the HDFS NameNode configuration for the JournalManager timeout?

Explorer

@Mark Petronic and I are building out QA and Production HA HDP 2.3.4.7 clusters. Our QA cluster runs entirely on VMware virtual machines.

We are having some problems with the underlying infrastructure that cause hosts to freeze, at times for up to 30-45 seconds. Yes, this is a totally separate problem and beyond the scope of the Hortonworks Community.

However, what I am trying to do in the meantime is raise the NameNode's timeout from the 20000 ms default to see if we can alleviate the problem.

What ends up happening is that once the NameNode times out attempting to connect to a quorum of JournalManager processes, it just shuts down.

2016-05-25 01:46:16,480 INFO  client.QuorumJournalManager (QuorumCall.java:waitFor(136)) - Waited 6001 ms (timeout=20000 ms) for a response for startLogSegment(416426). No responses yet.
2016-05-25 01:46:26,577 WARN  client.QuorumJournalManager (QuorumCall.java:waitFor(134)) - Waited 16098 ms (timeout=20000 ms) for a response for startLogSegment(416426). No responses yet.
2016-05-25 01:46:27,578 WARN  client.QuorumJournalManager (QuorumCall.java:waitFor(134)) - Waited 17099 ms (timeout=20000 ms) for a response for startLogSegment(416426). No responses yet.
2016-05-25 01:46:28,580 WARN  client.QuorumJournalManager (QuorumCall.java:waitFor(134)) - Waited 18100 ms (timeout=20000 ms) for a response for startLogSegment(416426). No responses yet.
2016-05-25 01:46:29,580 WARN  client.QuorumJournalManager (QuorumCall.java:waitFor(134)) - Waited 19101 ms (timeout=20000 ms) for a response for startLogSegment(416426). No responses yet.
2016-05-25 01:46:30,480 FATAL namenode.FSEditLog (JournalSet.java:mapJournalsAndReportErrors(398)) - Error: starting log segment 416426 failed for required journal (JournalAndStream(mgr=QJM to [172.19.64.30:8485, 172.19.64.31:8485, 172.19.64.32:8485], stream=null))
java.io.IOException: Timed out waiting 20000ms for a quorum of nodes to respond.
        at org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:137)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.startLogSegment(QuorumJournalManager.java:403)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet$JournalAndStream.startLogSegment(JournalSet.java:107)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet$3.apply(JournalSet.java:222)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:393)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet.startLogSegment(JournalSet.java:219)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1237)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.rollEditLog(FSEditLog.java:1206)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.rollEditLog(FSImage.java:1297)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.rollEditLog(FSNamesystem.java:5939)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rollEditLog(NameNodeRpcServer.java:1186)
        at org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolServerSideTranslatorPB.rollEditLog(NamenodeProtocolServerSideTranslatorPB.java:142)
        at org.apache.hadoop.hdfs.protocol.proto.NamenodeProtocolProtos$NamenodeProtocolService$2.callBlockingMethod(NamenodeProtocolProtos.java:12025)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145)
2016-05-25 01:46:30,483 INFO  util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 1
2016-05-25 01:46:30,487 INFO  provider.AuditProviderFactory (AuditProviderFactory.java:run(454)) - ==> JVMShutdownHook.run()
2016-05-25 01:46:30,487 INFO  provider.AuditProviderFactory (AuditProviderFactory.java:run(459)) - <== JVMShutdownHook.run()
2016-05-25 01:46:30,492 INFO  namenode.NameNode (LogAdapter.java:info(47)) - SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at nn01.qa.quasar.local/172.19.64.30
************************************************************/


Digging through the documentation, I thought the setting was ipc.client.connect.timeout in core-site.xml, but that does not seem to be the case.

Does anyone know which configuration parameter this is, and in which config file, so that I can raise it from the 20000 ms default?

2 REPLIES

Guru (Accepted Solution)

I believe it is dfs.qjournal.start-segment.timeout.ms. The default for this is 20000.

However, there are other configs you may need to adjust as well, such as dfs.qjournal.write-txns.timeout.ms.

That said, you are better off fixing your infrastructure issues than changing these default values.

Explorer

You are absolutely correct that fixing the infrastructure issues is the right solution; however, doing so requires working with a number of other teams and will take quite some time to sort out. Luckily, this is the QA cluster, so we can live with it for now.

Thank you very much for the hint. It turns out there are a number of properties that define how the NameNodes manage their various connections and timeouts to the JournalManagers.

The following is from org.apache.hadoop.hdfs.DFSConfigKeys.java:

// Quorum-journal timeouts for various operations. Unlikely to need
// to be tweaked, but configurable just in case.
public static final String DFS_QJOURNAL_START_SEGMENT_TIMEOUT_KEY = "dfs.qjournal.start-segment.timeout.ms";
public static final String DFS_QJOURNAL_PREPARE_RECOVERY_TIMEOUT_KEY = "dfs.qjournal.prepare-recovery.timeout.ms";
public static final String DFS_QJOURNAL_ACCEPT_RECOVERY_TIMEOUT_KEY = "dfs.qjournal.accept-recovery.timeout.ms";
public static final String DFS_QJOURNAL_FINALIZE_SEGMENT_TIMEOUT_KEY = "dfs.qjournal.finalize-segment.timeout.ms";
public static final String DFS_QJOURNAL_SELECT_INPUT_STREAMS_TIMEOUT_KEY = "dfs.qjournal.select-input-streams.timeout.ms";
public static final String DFS_QJOURNAL_GET_JOURNAL_STATE_TIMEOUT_KEY = "dfs.qjournal.get-journal-state.timeout.ms";
public static final String DFS_QJOURNAL_NEW_EPOCH_TIMEOUT_KEY = "dfs.qjournal.new-epoch.timeout.ms";
public static final String DFS_QJOURNAL_WRITE_TXNS_TIMEOUT_KEY = "dfs.qjournal.write-txns.timeout.ms";
public static final int DFS_QJOURNAL_START_SEGMENT_TIMEOUT_DEFAULT = 20000;
public static final int DFS_QJOURNAL_PREPARE_RECOVERY_TIMEOUT_DEFAULT = 120000;
public static final int DFS_QJOURNAL_ACCEPT_RECOVERY_TIMEOUT_DEFAULT = 120000;
public static final int DFS_QJOURNAL_FINALIZE_SEGMENT_TIMEOUT_DEFAULT = 120000;
public static final int DFS_QJOURNAL_SELECT_INPUT_STREAMS_TIMEOUT_DEFAULT = 20000;
public static final int DFS_QJOURNAL_GET_JOURNAL_STATE_TIMEOUT_DEFAULT = 120000;
public static final int DFS_QJOURNAL_NEW_EPOCH_TIMEOUT_DEFAULT = 120000;
public static final int DFS_QJOURNAL_WRITE_TXNS_TIMEOUT_DEFAULT = 20000;

In my case, I added the following custom properties to hdfs-site.xml:

dfs.qjournal.start-segment.timeout.ms = 90000
dfs.qjournal.select-input-streams.timeout.ms = 90000
dfs.qjournal.write-txns.timeout.ms = 90000
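
For anyone editing the file by hand rather than adding custom properties through Ambari (which writes these entries into hdfs-site.xml for you), the equivalent snippet would look roughly like this; the property names and 90000 ms values are the ones above, and the XML wrapping is just the standard Hadoop site-file form:

<!-- hdfs-site.xml: raise the quorum-journal timeouts from their 20000 ms defaults -->
<property>
  <name>dfs.qjournal.start-segment.timeout.ms</name>
  <value>90000</value>
</property>
<property>
  <name>dfs.qjournal.select-input-streams.timeout.ms</name>
  <value>90000</value>
</property>
<property>
  <name>dfs.qjournal.write-txns.timeout.ms</name>
  <value>90000</value>
</property>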

I also added the following property to core-site.xml:

ipc.client.connect.timeout = 90000
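
And the corresponding core-site.xml entry, again as a sketch of the hand-edited form:

<!-- core-site.xml: raise the IPC client connect timeout to match -->
<property>
  <name>ipc.client.connect.timeout</name>
  <value>90000</value>
</property>

Note that both NameNodes need to be restarted before any of these values take effect.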

So far, that seems to have alleviated the problem.