<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Spark2.1 in 5.11 can't start a yarn cluster job. in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Spark2-1-in-5-11-can-t-start-a-yarn-cluster-job/m-p/54773#M23660</link>
    <description>&lt;P&gt;This is some error caused by your app, rather than a Spark issue. You need to find the executor logs from the app and see what happened.&lt;/P&gt;</description>
    <pubDate>Tue, 16 May 2017 09:05:57 GMT</pubDate>
    <dc:creator>srowen</dc:creator>
    <dc:date>2017-05-16T09:05:57Z</dc:date>
    <item>
      <title>Spark2.1 in 5.11 can't start a yarn cluster job.</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Spark2-1-in-5-11-can-t-start-a-yarn-cluster-job/m-p/54768#M23659</link>
      <description>&lt;P&gt;My start script:&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;#!/bin/bash&lt;/P&gt;&lt;P&gt;set -ux&lt;/P&gt;&lt;P&gt;class=com.palmaplus.amapdata.Main&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;spark-submit \&lt;BR /&gt;--master yarn \&lt;BR /&gt;--deploy-mode cluster \&lt;BR /&gt;--num-executors 12 \&lt;BR /&gt;--driver-cores 1 \&lt;BR /&gt;--executor-cores 1 \&lt;BR /&gt;--driver-memory 4G \&lt;BR /&gt;--executor-memory 2G \&lt;BR /&gt;--conf spark.default.parallelism=24 \&lt;BR /&gt;--conf spark.shuffle.compress=false \&lt;BR /&gt;--conf spark.storage.memoryFraction=0.2 \&lt;BR /&gt;--conf "spark.driver.extraJavaOptions=-Dlog4j.configuration=log4j-spark.properties -XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/hadoop-yarn" \&lt;BR /&gt;--conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=log4j-spark.properties -XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/hadoop-yarn" \&lt;BR /&gt;--class $class \&lt;BR /&gt;--files "/data/projects/superdb/conf/application.conf,/data/projects/superdb/conf/brand-blacklist.txt" \&lt;BR /&gt;/data/projects/superdb/jar/amap-data-1.0-SNAPSHOT-jar-with-dependencies.jar \&lt;/P&gt;&lt;P&gt;exit 0&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;=====================================console log below===============================&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;17/05/16 14:37:11 INFO yarn.Client: Application report for application_1494307194668_0014 (state: ACCEPTED)&lt;BR /&gt;17/05/16 14:37:12 INFO yarn.Client: Application report for application_1494307194668_0014 (state: ACCEPTED)&lt;BR /&gt;17/05/16 14:37:13 INFO yarn.Client: Application report for application_1494307194668_0014 (state: FAILED)&lt;BR /&gt;17/05/16 14:37:13 INFO yarn.Client:&lt;BR /&gt;client token: N/A&lt;BR /&gt;diagnostics: Application application_1494307194668_0014 failed 2 times due to AM Container for appattempt_1494307194668_0014_000002 exited with exitCode: 15&lt;BR /&gt;For more detailed 
output, check application tracking page:&lt;A href="http://master:8088/proxy/application_1494307194668_0014/" target="_blank"&gt;http://master:8088/proxy/application_1494307194668_0014/&lt;/A&gt; Then, click on links to logs of each attempt.&lt;BR /&gt;Diagnostics: Exception from container-launch.&lt;BR /&gt;Container id: container_1494307194668_0014_02_000001&lt;BR /&gt;Exit code: 15&lt;BR /&gt;Stack trace: ExitCodeException exitCode=15:&lt;BR /&gt;at org.apache.hadoop.util.Shell.runCommand(Shell.java:601)&lt;BR /&gt;at org.apache.hadoop.util.Shell.run(Shell.java:504)&lt;BR /&gt;at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:786)&lt;BR /&gt;at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:213)&lt;BR /&gt;at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)&lt;BR /&gt;at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)&lt;BR /&gt;at java.util.concurrent.FutureTask.run(FutureTask.java:266)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)&lt;BR /&gt;at java.lang.Thread.run(Thread.java:745)&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;Container exited with a non-zero exit code 15&lt;BR /&gt;Failing this attempt. 
Failing the application.&lt;BR /&gt;ApplicationMaster host: N/A&lt;BR /&gt;ApplicationMaster RPC port: -1&lt;BR /&gt;queue: root.users.hdfs&lt;BR /&gt;start time: 1494916610862&lt;BR /&gt;final status: FAILED&lt;BR /&gt;tracking URL: &lt;A href="http://master:8088/cluster/app/application_1494307194668_0014" target="_blank"&gt;http://master:8088/cluster/app/application_1494307194668_0014&lt;/A&gt;&lt;BR /&gt;user: hdfs&lt;BR /&gt;Exception in thread "main" org.apache.spark.SparkException: Application application_1494307194668_0014 finished with failed status&lt;BR /&gt;at org.apache.spark.deploy.yarn.Client.run(Client.scala:1167)&lt;BR /&gt;at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1213)&lt;BR /&gt;at org.apache.spark.deploy.yarn.Client.main(Client.scala)&lt;BR /&gt;at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)&lt;BR /&gt;at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)&lt;BR /&gt;at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)&lt;BR /&gt;at java.lang.reflect.Method.invoke(Method.java:498)&lt;BR /&gt;at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738)&lt;BR /&gt;at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)&lt;BR /&gt;at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)&lt;BR /&gt;at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)&lt;BR /&gt;at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)&lt;BR /&gt;17/05/16 14:37:13 INFO util.ShutdownHookManager: Shutdown hook called&lt;BR /&gt;17/05/16 14:37:13 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-d231fd6a-4752-4dc1-b041-080756d9c5aa&lt;BR /&gt;+ exit 0&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;================yarn logs -applicationId 
application_1494307194668_0014=========================&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;17/05/16 14:39:14 INFO client.RMProxy: Connecting to ResourceManager at master/10.0.25.5:8032&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;Container: container_1494307194668_0014_02_000001 on slave2_8041&lt;BR /&gt;==================================================================&lt;BR /&gt;LogType:stderr&lt;BR /&gt;Log Upload Time:Tue May 16 14:37:14 +0800 2017&lt;BR /&gt;LogLength:1742&lt;BR /&gt;Log Contents:&lt;BR /&gt;Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties&lt;BR /&gt;SLF4J: Class path contains multiple SLF4J bindings.&lt;BR /&gt;SLF4J: Found binding in [jar:file:/data/yarn/nm/usercache/hdfs/filecache/43/__spark_libs__8927180205054833354.zip/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]&lt;BR /&gt;SLF4J: Found binding in [jar:file:/data/cloudera/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]&lt;BR /&gt;SLF4J: See &lt;A href="http://www.slf4j.org/codes.html#multiple_bindings" target="_blank"&gt;http://www.slf4j.org/codes.html#multiple_bindings&lt;/A&gt; for an explanation.&lt;BR /&gt;SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]&lt;BR /&gt;17/05/16 14:37:11 INFO SignalUtils: Registered signal handler for TERM&lt;BR /&gt;17/05/16 14:37:11 INFO SignalUtils: Registered signal handler for HUP&lt;BR /&gt;17/05/16 14:37:11 INFO SignalUtils: Registered signal handler for INT&lt;BR /&gt;17/05/16 14:37:12 INFO ApplicationMaster: Preparing Local resources&lt;BR /&gt;17/05/16 14:37:12 INFO ApplicationMaster: ApplicationAttemptId: appattempt_1494307194668_0014_000002&lt;BR /&gt;17/05/16 14:37:12 INFO SecurityManager: Changing view acls to: yarn,hdfs&lt;BR /&gt;17/05/16 14:37:12 INFO SecurityManager: Changing modify acls to: yarn,hdfs&lt;BR /&gt;17/05/16 14:37:12 INFO SecurityManager: Changing view acls groups 
to:&lt;BR /&gt;17/05/16 14:37:12 INFO SecurityManager: Changing modify acls groups to:&lt;BR /&gt;17/05/16 14:37:12 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hdfs); groups with view permissions: Set(); users with modify permissions: Set(yarn, hdfs); groups with modify permissions: Set()&lt;BR /&gt;17/05/16 14:37:12 INFO ApplicationMaster: Starting the user application in a separate Thread&lt;BR /&gt;17/05/16 14:37:12 INFO ApplicationMaster: Waiting for spark context initialization...&lt;/P&gt;&lt;P&gt;LogType:stdout&lt;BR /&gt;Log Upload Time:Tue May 16 14:37:14 +0800 2017&lt;BR /&gt;LogLength:0&lt;BR /&gt;Log Contents:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Container: container_1494307194668_0014_01_000001 on slave3_8041&lt;BR /&gt;==================================================================&lt;BR /&gt;LogType:stderr&lt;BR /&gt;Log Upload Time:Tue May 16 14:37:13 +0800 2017&lt;BR /&gt;LogLength:1742&lt;BR /&gt;Log Contents:&lt;BR /&gt;Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties&lt;BR /&gt;SLF4J: Class path contains multiple SLF4J bindings.&lt;BR /&gt;SLF4J: Found binding in [jar:file:/data/yarn/nm/usercache/hdfs/filecache/44/__spark_libs__8927180205054833354.zip/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]&lt;BR /&gt;SLF4J: Found binding in [jar:file:/data/cloudera/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]&lt;BR /&gt;SLF4J: See &lt;A href="http://www.slf4j.org/codes.html#multiple_bindings" target="_blank"&gt;http://www.slf4j.org/codes.html#multiple_bindings&lt;/A&gt; for an explanation.&lt;BR /&gt;SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]&lt;BR /&gt;17/05/16 14:36:58 INFO SignalUtils: Registered signal handler for TERM&lt;BR /&gt;17/05/16 14:36:58 INFO SignalUtils: Registered signal handler for HUP&lt;BR /&gt;17/05/16 
14:36:58 INFO SignalUtils: Registered signal handler for INT&lt;BR /&gt;17/05/16 14:37:06 INFO ApplicationMaster: Preparing Local resources&lt;BR /&gt;17/05/16 14:37:07 INFO ApplicationMaster: ApplicationAttemptId: appattempt_1494307194668_0014_000001&lt;BR /&gt;17/05/16 14:37:07 INFO SecurityManager: Changing view acls to: yarn,hdfs&lt;BR /&gt;17/05/16 14:37:07 INFO SecurityManager: Changing modify acls to: yarn,hdfs&lt;BR /&gt;17/05/16 14:37:07 INFO SecurityManager: Changing view acls groups to:&lt;BR /&gt;17/05/16 14:37:07 INFO SecurityManager: Changing modify acls groups to:&lt;BR /&gt;17/05/16 14:37:07 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hdfs); groups with view permissions: Set(); users with modify permissions: Set(yarn, hdfs); groups with modify permissions: Set()&lt;BR /&gt;17/05/16 14:37:07 INFO ApplicationMaster: Starting the user application in a separate Thread&lt;BR /&gt;17/05/16 14:37:07 INFO ApplicationMaster: Waiting for spark context initialization...&lt;/P&gt;&lt;P&gt;LogType:stdout&lt;BR /&gt;Log Upload Time:Tue May 16 14:37:13 +0800 2017&lt;BR /&gt;LogLength:0&lt;BR /&gt;Log Contents:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Any help, please.&lt;/P&gt;</description>
      <pubDate>Fri, 16 Sep 2022 11:37:06 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Spark2-1-in-5-11-can-t-start-a-yarn-cluster-job/m-p/54768#M23659</guid>
      <dc:creator>rotciv</dc:creator>
      <dc:date>2022-09-16T11:37:06Z</dc:date>
    </item>
    <item>
      <title>Re: Spark2.1 in 5.11 can't start a yarn cluster job.</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Spark2-1-in-5-11-can-t-start-a-yarn-cluster-job/m-p/54773#M23660</link>
      <description>&lt;P&gt;This is some error caused by your app, rather than a Spark issue. You need to find the executor logs from the app and see what happened.&lt;/P&gt;</description>
      <pubDate>Tue, 16 May 2017 09:05:57 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Spark2-1-in-5-11-can-t-start-a-yarn-cluster-job/m-p/54773#M23660</guid>
      <dc:creator>srowen</dc:creator>
      <dc:date>2017-05-16T09:05:57Z</dc:date>
    </item>
    <item>
      <title>Re: Spark2.1 in 5.11 can't start a yarn cluster job.</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Spark2-1-in-5-11-can-t-start-a-yarn-cluster-job/m-p/54779#M23661</link>
      <description>&lt;P&gt;You're right, the reason is that I didn't initialize a SparkContext until receiving a message from Kafka.&lt;/P&gt;</description>
      <pubDate>Tue, 16 May 2017 10:29:25 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Spark2-1-in-5-11-can-t-start-a-yarn-cluster-job/m-p/54779#M23661</guid>
      <dc:creator>rotciv</dc:creator>
      <dc:date>2017-05-16T10:29:25Z</dc:date>
    </item>
  </channel>
</rss>