Member since: 03-06-2017
Posts: 4
Kudos Received: 0
Solutions: 0
07-07-2017 12:26 PM
We set up NameNode HA with manual failover and three JournalNodes. The JournalNodes are laid out so that each NameNode server runs one JournalNode and the third runs on a slave node. I often see the active NameNode crash when it attempts to start a new log segment: it hits timeouts from multiple JournalNodes, which causes it to abort. The JournalNodes report the following exception:

##### JournalNode 1:

2017-07-06 20:51:37,735 WARN org.apache.hadoop.ipc.Server: IPC Server handler 2 on 8485, call org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol.startLogSegment from 192.XX.XXX.98:56127 Call#7626920 Retry#0: output error
2017-07-06 20:51:37,736 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 8485 caught an exception
java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:270)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:461)
    at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2621)
    at org.apache.hadoop.ipc.Server.access$1900(Server.java:134)
    at org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:989)
    at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:1054)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2141)

##### JournalNode 2:

2017-07-06 20:51:43,961 WARN org.apache.hadoop.ipc.Server: IPC Server handler 0 on 8485, call org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol.startLogSegment from 192.XX.XXX.98:56757 Call#7626921 Retry#0: output error
2017-07-06 20:51:44,119 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 8485 caught an exception
java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:270)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:461)
    at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2621)
    at org.apache.hadoop.ipc.Server.access$1900(Server.java:134)
    at org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:989)
    at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:1054)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2141)

##### Active NameNode log:

2017-07-06 20:51:07,678 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 1193538431
2017-07-06 20:51:13,679 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 6001 ms (timeout=20000 ms) for a response for startLogSegment(1193538431). Succeeded so far: [192.XX.XXX.100:8485]
2017-07-06 20:51:14,679 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 7002 ms (timeout=20000 ms) for a response for startLogSegment(1193538431). Succeeded so far: [192.XX.XXX.100:8485]
2017-07-06 20:51:15,680 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 8003 ms (timeout=20000 ms) for a response for startLogSegment(1193538431). Succeeded so far: [192.XX.XXX.100:8485]
2017-07-06 20:51:16,682 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 9004 ms (timeout=20000 ms) for a response for startLogSegment(1193538431). Succeeded so far: [192.XX.XXX.100:8485]
2017-07-06 20:51:17,683 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 10005 ms (timeout=20000 ms) for a response for startLogSegment(1193538431). Succeeded so far: [192.XX.XXX.100:8485]
2017-07-06 20:51:18,684 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 11007 ms (timeout=20000 ms) for a response for startLogSegment(1193538431). Succeeded so far: [192.XX.XXX.100:8485]
2017-07-06 20:51:19,685 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 12008 ms (timeout=20000 ms) for a response for startLogSegment(1193538431). Succeeded so far: [192.XX.XXX.100:8485]
2017-07-06 20:51:20,687 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 13009 ms (timeout=20000 ms) for a response for startLogSegment(1193538431). Succeeded so far: [192.XX.XXX.100:8485]
2017-07-06 20:51:21,688 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 14010 ms (timeout=20000 ms) for a response for startLogSegment(1193538431). Succeeded so far: [192.XX.XXX.100:8485]
2017-07-06 20:51:22,688 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 15011 ms (timeout=20000 ms) for a response for startLogSegment(1193538431). Succeeded so far: [192.XX.XXX.100:8485]
2017-07-06 20:51:23,689 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 16012 ms (timeout=20000 ms) for a response for startLogSegment(1193538431). Succeeded so far: [192.XX.XXX.100:8485]
2017-07-06 20:51:24,691 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 17013 ms (timeout=20000 ms) for a response for startLogSegment(1193538431). Succeeded so far: [192.XX.XXX.100:8485]
2017-07-06 20:51:25,675 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30001 milliseconds
2017-07-06 20:51:25,692 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 18014 ms (timeout=20000 ms) for a response for startLogSegment(1193538431). Succeeded so far: [192.XX.XXX.100:8485]
2017-07-06 20:51:26,692 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 19015 ms (timeout=20000 ms) for a response for startLogSegment(1193538431). Succeeded so far: [192.XX.XXX.100:8485]
2017-07-06 20:51:27,678 ERROR org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: starting log segment 1193538431 failed for required journal (JournalAndStream(mgr=QJM to [192.XX.XXX.98:8485, 192.XX.XXX.99:8485, 192.XX.XXX.100:8485], stream=null))
java.io.IOException: Timed out waiting 20000ms for a quorum of nodes to respond.

I have checked for GC pauses and I/O bottlenecks and couldn't find any. I already increased the heap size and the RPC handler count on the JournalNodes, with no improvement, and there are no OS-level exceptions either. The exception at "sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:270)" makes me suspicious of a Java socket-handling issue; could this be a Java bug? I am also surprised that both JournalNodes (running on different servers) hit the same exception at the same time. We are on Cloudera CDH 5.5, Hadoop/HDFS version 2.6.0. Any inputs would be greatly appreciated. Thank you for reading through; let me know if I can provide additional logs. Thanks.
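In case it helps, one stopgap I am evaluating (my own assumption, not a confirmed fix) is raising the QJM client timeouts in hdfs-site.xml so that a slow startLogSegment response does not immediately bring down the active NameNode. The 20000 ms default visible in the log corresponds to the keys below; the values are only illustrative, and raising them masks the underlying stall rather than fixing it:

<!-- hdfs-site.xml sketch: raise QJM client timeouts (illustrative values) -->
<property>
  <!-- timeout for startLogSegment, the call failing in the log above; default 20000 -->
  <name>dfs.qjournal.start-segment.timeout.ms</name>
  <value>90000</value>
</property>
<property>
  <!-- timeout for writing edit-log transactions to the quorum; default 20000 -->
  <name>dfs.qjournal.write-txns.timeout.ms</name>
  <value>90000</value>
</property>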
Labels:
- HDFS
03-07-2017 10:54 AM
The problem seems to be with configuration rather than a missing dependency, but I am not sure which configuration is missing. Here is my configuration:

spark-defaults.conf:

spark.authenticate=false
spark.dynamicAllocation.enabled=true
spark.dynamicAllocation.executorIdleTimeout=60
spark.dynamicAllocation.minExecutors=0
spark.dynamicAllocation.schedulerBacklogTimeout=1
spark.eventLog.dir=hdfs://dtest.turn.com:8020/user/spark/applicationHistory
spark.eventLog.enabled=true
spark.serializer=org.apache.spark.serializer.KryoSerializer
spark.shuffle.service.enabled=true
spark.shuffle.service.port=7337
spark.master=yarn
spark.yarn.jars=hdfs://dtest.turn.com:8020/user/spark/spark-2.1-bin-hadoop/*
spark.yarn.historyServer.address=http://dtest.turn.com:18088
spark.driver.extraLibraryPath=/opt/cloudera/parcels/CDH-5.5.4-1.cdh5.5.4.p0.9/lib/hadoop/lib/native
spark.executor.extraLibraryPath=/opt/cloudera/parcels/CDH-5.5.4-1.cdh5.5.4.p0.9/lib/hadoop/lib/native
spark.yarn.am.extraLibraryPath=/opt/cloudera/parcels/CDH-5.5.4-1.cdh5.5.4.p0.9/lib/hadoop/lib/native
spark.yarn.config.gatewayPath=/opt/cloudera/parcels
spark.yarn.config.replacementPath={{HADOOP_COMMON_HOME}}/../../..
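One more experiment on my list (an assumption on my part; the value below is illustrative, not something from our working config) is giving Spark 2.1 an explicit warehouse directory in the same file, since the HiveSessionState error we see at startup is reported by others in connection with warehouse-dir resolution:

# hypothetical addition to spark-defaults.conf; adjust the path to the actual Hive warehouse
spark.sql.warehouse.dir=hdfs://dtest.turn.com:8020/user/hive/warehouse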
03-06-2017 10:32 PM
It is Apache Spark 2.1. One reference we used (describing a situation similar to ours): https://www.linkedin.com/pulse/running-spark-2xx-cloudera-hadoop-distro-cdh-deenar-toraskar-cfa
03-06-2017 09:46 PM
We are running into issues when we launch PySpark (with or without YARN). It appears to be looking for a hive-site.xml file, which we have already copied to the Spark configuration path, but I am not sure whether there are specific parameters it must contain.

[apps@devdm003.dev1 ~]$ pyspark --master yarn --verbose
WARNING: User-defined SPARK_HOME (/opt/spark) overrides detected (/opt/cloudera/parcels/CDH-5.5.4-1.cdh5.5.4.p0.9/lib/spark).
WARNING: Running pyspark from user-defined location.
Python 2.7.8 (default, Oct 22 2016, 09:02:55)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-17)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Using properties file: /opt/cloudera/parcels/CDH-5.5.4-1.cdh5.5.4.p0.9/lib/spark/conf/spark-defaults.conf
Adding default property: spark.serializer=org.apache.spark.serializer.KryoSerializer
Adding default property: spark.yarn.jars=hdfs://devdm001.dev1.turn.com:8020/user/spark/spark-2.1-bin-hadoop/*
Adding default property: spark.eventLog.enabled=true
Adding default property: spark.shuffle.service.enabled=true
Adding default property: spark.driver.extraLibraryPath=/opt/cloudera/parcels/CDH-5.5.4-1.cdh5.5.4.p0.9/lib/hadoop/lib/native
Adding default property: spark.yarn.historyServer.address=http://devdm004.dev1.turn.com:18088
Adding default property: spark.dynamicAllocation.schedulerBacklogTimeout=1
Adding default property: spark.yarn.am.extraLibraryPath=/opt/cloudera/parcels/CDH-5.5.4-1.cdh5.5.4.p0.9/lib/hadoop/lib/native
Adding default property: spark.yarn.config.gatewayPath=/opt/cloudera/parcels
Adding default property: spark.yarn.config.replacementPath={{HADOOP_COMMON_HOME}}/../../..
Adding default property: spark.shuffle.service.port=7337
Adding default property: spark.master=yarn
Adding default property: spark.authenticate=false
Adding default property: spark.executor.extraLibraryPath=/opt/cloudera/parcels/CDH-5.5.4-1.cdh5.5.4.p0.9/lib/hadoop/lib/native
Adding default property: spark.eventLog.dir=hdfs://devdm001.dev1.turn.com:8020/user/spark/applicationHistory
Adding default property: spark.dynamicAllocation.enabled=true
Adding default property: spark.dynamicAllocation.minExecutors=0
Adding default property: spark.dynamicAllocation.executorIdleTimeout=60
Parsed arguments:
  master                  yarn
  deployMode              null
  executorMemory          null
  executorCores           null
  totalExecutorCores      null
  propertiesFile          /opt/cloudera/parcels/CDH-5.5.4-1.cdh5.5.4.p0.9/lib/spark/conf/spark-defaults.conf
  driverMemory            null
  driverCores             null
  driverExtraClassPath    null
  driverExtraLibraryPath  /opt/cloudera/parcels/CDH-5.5.4-1.cdh5.5.4.p0.9/lib/hadoop/lib/native
  driverExtraJavaOptions  null
  supervise               false
  queue                   null
  numExecutors            null
  files                   null
  pyFiles                 null
  archives                null
  mainClass               null
  primaryResource         pyspark-shell
  name                    PySparkShell
  childArgs               []
  jars                    null
  packages                null
  packagesExclusions      null
  repositories            null
  verbose                 true
Spark properties used, including those specified through --conf and those from the properties file /opt/cloudera/parcels/CDH-5.5.4-1.cdh5.5.4.p0.9/lib/spark/conf/spark-defaults.conf:
  spark.executor.extraLibraryPath -> /opt/cloudera/parcels/CDH-5.5.4-1.cdh5.5.4.p0.9/lib/hadoop/lib/native
  spark.yarn.jars -> hdfs://devdm001.dev1.turn.com:8020/user/spark/spark-2.1-bin-hadoop/*
  spark.driver.extraLibraryPath -> /opt/cloudera/parcels/CDH-5.5.4-1.cdh5.5.4.p0.9/lib/hadoop/lib/native
  spark.authenticate -> false
  spark.yarn.historyServer.address -> http://devdm004.dev1.turn.com:18088
  spark.yarn.am.extraLibraryPath -> /opt/cloudera/parcels/CDH-5.5.4-1.cdh5.5.4.p0.9/lib/hadoop/lib/native
  spark.eventLog.enabled -> true
  spark.dynamicAllocation.schedulerBacklogTimeout -> 1
  spark.yarn.config.gatewayPath -> /opt/cloudera/parcels
  spark.serializer -> org.apache.spark.serializer.KryoSerializer
  spark.dynamicAllocation.executorIdleTimeout -> 60
  spark.dynamicAllocation.minExecutors -> 0
  spark.shuffle.service.enabled -> true
  spark.yarn.config.replacementPath -> {{HADOOP_COMMON_HOME}}/../../..
  spark.shuffle.service.port -> 7337
  spark.eventLog.dir -> hdfs://devdm001.dev1.turn.com:8020/user/spark/applicationHistory
  spark.master -> yarn
  spark.dynamicAllocation.enabled -> true
Main class:
org.apache.spark.api.python.PythonGatewayServer
Arguments:

System properties:
  spark.executor.extraLibraryPath -> /opt/cloudera/parcels/CDH-5.5.4-1.cdh5.5.4.p0.9/lib/hadoop/lib/native
  spark.driver.extraLibraryPath -> /opt/cloudera/parcels/CDH-5.5.4-1.cdh5.5.4.p0.9/lib/hadoop/lib/native
  spark.yarn.jars -> hdfs://devdm001.dev1.turn.com:8020/user/spark/spark-2.1-bin-hadoop/*
  spark.authenticate -> false
  spark.yarn.historyServer.address -> http://devdm004.dev1.turn.com:18088
  spark.yarn.am.extraLibraryPath -> /opt/cloudera/parcels/CDH-5.5.4-1.cdh5.5.4.p0.9/lib/hadoop/lib/native
  spark.eventLog.enabled -> true
  spark.dynamicAllocation.schedulerBacklogTimeout -> 1
  SPARK_SUBMIT -> true
  spark.yarn.config.gatewayPath -> /opt/cloudera/parcels
  spark.serializer -> org.apache.spark.serializer.KryoSerializer
  spark.shuffle.service.enabled -> true
  spark.dynamicAllocation.minExecutors -> 0
  spark.dynamicAllocation.executorIdleTimeout -> 60
  spark.app.name -> PySparkShell
  spark.yarn.config.replacementPath -> {{HADOOP_COMMON_HOME}}/../../..
  spark.submit.deployMode -> client
  spark.shuffle.service.port -> 7337
  spark.eventLog.dir -> hdfs://devdm001.dev1.turn.com:8020/user/spark/applicationHistory
  spark.master -> yarn
  spark.yarn.isPython -> true
  spark.dynamicAllocation.enabled -> true
Classpath elements:

log4j:ERROR Could not find value for key log4j.appender.WARN
log4j:ERROR Could not instantiate appender named "WARN".
log4j:ERROR Could not find value for key log4j.appender.DEBUG
log4j:ERROR Could not instantiate appender named "DEBUG".
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.5.4-1.cdh5.5.4.p0.9/jars/avro-tools-1.7.6-cdh5.5.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.5.4-1.cdh5.5.4.p0.9/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/server/turn/deploy/160622/turn/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
Traceback (most recent call last):
  File "/opt/spark/python/pyspark/shell.py", line 43, in <module>
    spark = SparkSession.builder\
  File "/opt/spark/python/pyspark/sql/session.py", line 179, in getOrCreate
    session._jsparkSession.sessionState().conf().setConfString(key, value)
  File "/opt/spark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
  File "/opt/spark/python/pyspark/sql/utils.py", line 79, in deco
    raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.IllegalArgumentException: u"Error while instantiating 'org.apache.spark.sql.hive.HiveSessionState':"

We installed Spark 2.1 for business reasons and updated the SPARK_HOME variable in the safety valve (ensuring SPARK_HOME is set early in spark-env.sh so the other PATH variables are set correctly). I have also read that Spark 2.1 has no hard dependency on hive-site.xml, which makes it all the more confusing that it appears to be looking for one. Did anyone face a similar issue? Any suggestions? This is a Linux environment running CDH 5.5.4.
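To narrow down whether Hive session initialization is the only thing failing, here is a minimal isolation script I plan to try, assuming spark.sql.catalogImplementation (Spark 2.x's internal catalog switch) can be set this way; it deliberately skips HiveSessionState by using the in-memory catalog:

from pyspark.sql import SparkSession

# Build a session with the in-memory catalog instead of the Hive catalog.
# If this starts cleanly while plain `pyspark` does not, the failure is
# confined to Hive support (hive-site.xml / HiveSessionState) rather than
# YARN or the Spark 2.1 installation itself.
spark = (SparkSession.builder
         .master("yarn")
         .appName("hive-isolation-test")
         .config("spark.sql.catalogImplementation", "in-memory")
         .getOrCreate())
print(spark.range(10).count())  # smoke test: should print 10
spark.stop()

Run it with spark-submit from the Spark 2.1 installation (e.g. /opt/spark/bin/spark-submit isolation_test.py).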
Labels:
- Apache Spark
- Cloudera Manager