
Spark 1.5 - SparkException: Yarn application has already ended!

Contributor

Hi,

I have set up Spark 1.5 through Ambari on HDP 2.3.4. Ambari reports that all the services on HDP are running fine, including Spark. I have installed Zeppelin and am trying to use a notebook as documented in the Zeppelin tech preview.

However, when I run sc.version, I see the following error:

org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
 at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:119)
 at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:64)
 at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:141)
 at org.apache.spark.SparkContext.<init>(SparkContext.scala:497)
 at org.apache.zeppelin.spark.SparkInterpreter.createSparkContext(SparkInterpreter.java:339)
 at org.apache.zeppelin.spark.SparkInterpreter.getSparkContext(SparkInterpreter.java:149)
 at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:465)
 at org.apache.zeppelin.interpreter.ClassloaderInterpreter.open(ClassloaderInterpreter.java:74)
 at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:68)
 at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:92)
 at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:276)
 at org.apache.zeppelin.scheduler.Job.run(Job.java:170)
 at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:118)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 at java.util.concurrent.FutureTask.run(FutureTask.java:266)
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 at java.lang.Thread.run(Thread.java:745)

Could you let me know what is causing this? Does it relate to a configuration issue? Attached is a screenshot of the interpreter configuration.

1633-zeppelin-interpreter-settings.png
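
(For anyone hitting the same error: "Yarn application has already ended" usually means the application master failed on the cluster side, so the YARN application logs are the first place to look. A minimal sketch of pulling them, assuming the YARN CLI is available on a cluster node; the application ID below is a placeholder to replace with your own:)

# List recently failed or killed applications to find the Zeppelin one
yarn application -list -appStates FAILED,KILLED

# Fetch the full container logs for that application (substitute the real ID)
yarn logs -applicationId application_1450000000000_0001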

1 ACCEPTED SOLUTION

Contributor

@Artem Ervits This is still an outstanding issue with 1.5.2, and no workaround has been found. However, I have now upgraded to Spark 1.6, integrated it with Zeppelin, and it works fine.
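
(For completeness: which Spark build Zeppelin launches is controlled by SPARK_HOME in conf/zeppelin-env.sh. A minimal sketch of pointing Zeppelin at the upgraded Spark client, assuming a typical HDP install layout; the paths may differ on your cluster:)

# conf/zeppelin-env.sh -- paths below assume a typical HDP layout
export SPARK_HOME=/usr/hdp/current/spark-client
export HADOOP_CONF_DIR=/etc/hadoop/conf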


25 REPLIES

Expert Contributor

@vbhoomireddy @Neeraj Sabharwal

I have now got it working: Zeppelin 0.6.0 runs on Spark 1.5.2.

2340-zeppelin-spark-version-1-5-2.jpg

If this is still of interest to anyone, let me know and I can describe what I did.

New Contributor

Hi, we have the same problem with the same Spark and Hadoop versions. How did you solve the problem? Thank you very much.

Expert Contributor

@Jorge de la peña Jorge, next time you should quote the user you are talking to; it was pure luck that I saw your post. Thanks for posting this, by the way. It motivated me to finally put together the how-I-did-it:

https://markobigdata.wordpress.com/2016/03/08/building-apache-zeppelin-0-6-0-on-spark-1-5-2-in-a-clu...

Hope it helps!
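
(For reference, the core of that source build is a single Maven invocation; a rough sketch, assuming the Zeppelin 0.6.0 source tree and its documented build profiles. The exact flags here are an assumption; see the post above for the full procedure:)

# From the root of the Zeppelin source tree; profile flags may vary per setup
mvn clean package -Pspark-1.5 -Dspark.version=1.5.2 -Phadoop-2.6 -Pyarn -DskipTests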

New Contributor

@marko Thank you for the blog, but we have a problem. Once we installed Zeppelin and started the daemon, its status is "zeppelin running but process is dead" and we cannot connect to the page. Do you know why? Thanks.

Expert Contributor

@Jorge de la peña

Have you checked the log files? Btw, if you can upgrade to Spark 1.6.0, then you should do it.
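
(By default the logs live under the Zeppelin installation directory; a minimal sketch of checking the daemon and the latest log, assuming a default install. The exact log file name depends on the user and host:)

# From the Zeppelin installation directory
bin/zeppelin-daemon.sh status
tail -n 100 logs/zeppelin-*.log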