
Spark 1.5 - SparkException: Yarn application has already ended!

Contributor

Hi,

I have set up Spark 1.5 through Ambari on HDP 2.3.4. Ambari reports that all services on HDP, including Spark, are running fine. I have installed Zeppelin and am trying to use the notebook as documented in the Zeppelin tech preview.

However, when I run sc.version, I am seeing an error
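For reference, the notebook paragraph is nothing more than the following (run with the default %spark interpreter, as described in the tech preview):

%spark
sc.version

The full stack trace is: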

org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
 at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:119)
 at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:64)
 at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:141)
 at org.apache.spark.SparkContext.<init>(SparkContext.scala:497)
 at org.apache.zeppelin.spark.SparkInterpreter.createSparkContext(SparkInterpreter.java:339)
 at org.apache.zeppelin.spark.SparkInterpreter.getSparkContext(SparkInterpreter.java:149)
 at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:465)
 at org.apache.zeppelin.interpreter.ClassloaderInterpreter.open(ClassloaderInterpreter.java:74)
 at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:68)
 at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:92)
 at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:276)
 at org.apache.zeppelin.scheduler.Job.run(Job.java:170)
 at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:118)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 at java.util.concurrent.FutureTask.run(FutureTask.java:266)
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 at java.lang.Thread.run(Thread.java:745)

Could you let me know what is causing this? Does it relate to a configuration issue? Attached is a screenshot of the interpreter configuration.

1633-zeppelin-interpreter-settings.png

1 ACCEPTED SOLUTION

Contributor

@Artem Ervits This is still an outstanding issue with 1.5.2. No workaround has been found. However, I have now upgraded to Spark 1.6 integrated with Zeppelin, which works fine.


25 REPLIES

Contributor

I forgot to mention that I haven't exported SPARK_HOME in zeppelin-env.sh, as I thought Zeppelin would pick it up from the HDP configuration.
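If it does turn out to be needed, I assume the export would look something like the line below (the /usr/hdp/current/spark-client path is my guess for a stock HDP install; adjust it to wherever the Spark client actually lives):

# hypothetical addition to zeppelin-env.sh
export SPARK_HOME=/usr/hdp/current/spark-client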

Master Mentor
@vbhoomireddy

org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.

Make sure that YARN and Spark are up.
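A quick way to sanity-check YARN from the command line is something like the following (a sketch; it assumes the yarn client is on the PATH of the node you run it from):

# confirm the ResourceManager responds and NodeManagers are registered
yarn node -list
# list applications currently known to the ResourceManager
yarn application -list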

Contributor

@Neeraj Sabharwal

The cluster is up and running. I am able to run terasort programs on YARN, and I am also able to run the SparkPi example:

./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn-client --num-executors 3 --driver-memory 512m --executor-memory 512m --executor-cores 1 lib/spark-examples*.jar 10

Master Mentor

@vbhoomireddy

Recheck the interpreter settings, save the interpreter, restart the interpreter, and then try again.

If that does not work, restart Zeppelin and try again.
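If Zeppelin was installed manually from the tech preview, restarting the whole daemon is usually something like the following (the install path here is only an example; use wherever Zeppelin is unpacked on your box):

# restart the Zeppelin server process
/opt/incubator-zeppelin/bin/zeppelin-daemon.sh restart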

Master Mentor

@vbhoomireddy

Are you using an Ambari-based install to run Zeppelin?

Master Mentor

@vbhoomireddy See this for the port change.

Contributor

Thanks @Neeraj Sabharwal. I just figured out that it started running on port 4044. It looks like it's a container launch issue. When I log in to the RM UI and look at the attempt log, I see the issue below.

AM Container for appattempt_1453983048063_0058_000002 exited with exitCode: 1

Diagnostics: Exception from container-launch.
Container id: container_e08_1453983048063_0058_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:

I am trying to understand why the container launch is failing. Any ideas, please?
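In case it helps, the next thing I plan to do is pull the full aggregated logs for that attempt with the YARN CLI (using the application ID from the attempt above; log aggregation has to be enabled for this to return anything):

# fetch the container logs for the failed application master
yarn logs -applicationId application_1453983048063_0058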

Thanks

Contributor

zeppelin-env.sh has been updated with the below configuration.

export HADOOP_CONF_DIR=/etc/hadoop/conf

export ZEPPELIN_PORT=9995

export ZEPPELIN_JAVA_OPTS="-Dhdp.version=2.3.4.0-3485"

I am just wondering whether Zeppelin is unable to pick up the Hadoop configuration properly and is hence failing.
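One more thing I intend to check, since the AM exits with code 1 and the error mentions the application master: whether the hdp.version setting also needs to be passed to Spark itself, not just to Zeppelin. This is only a guess on my part, and the sketch below assumes the stock spark-defaults.conf location for an HDP Spark client:

# hypothetical additions to /usr/hdp/current/spark-client/conf/spark-defaults.conf
spark.driver.extraJavaOptions -Dhdp.version=2.3.4.0-3485
spark.yarn.am.extraJavaOptions -Dhdp.version=2.3.4.0-3485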

Master Mentor

@vbhoomireddy That is a possibility. Please follow/check this step by step: http://hortonworks.com/hadoop-tutorial/apache-zeppelin/