Created on 01-29-2016 11:07 AM - edited 08-19-2019 03:48 AM
Hi,
I have set up Spark 1.5 through Ambari on HDP 2.3.4. Ambari reports that all HDP services, including Spark, are running fine. I have installed Zeppelin and am trying to use the notebook as documented in the Zeppelin tech preview.
However, when I run sc.version, I see the following error:
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:119)
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:64)
    at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:141)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:497)
    at org.apache.zeppelin.spark.SparkInterpreter.createSparkContext(SparkInterpreter.java:339)
    at org.apache.zeppelin.spark.SparkInterpreter.getSparkContext(SparkInterpreter.java:149)
    at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:465)
    at org.apache.zeppelin.interpreter.ClassloaderInterpreter.open(ClassloaderInterpreter.java:74)
    at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:68)
    at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:92)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:276)
    at org.apache.zeppelin.scheduler.Job.run(Job.java:170)
    at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:118)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Can you let me know what is causing this? Does it point to a configuration issue? Attached is a screenshot of the interpreter configuration.
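In case it helps with diagnosis: I understand the next step for this kind of failure is to pull the aggregated YARN logs for the failed application, roughly like the following (the application ID below is only a placeholder):

# List recently failed/killed YARN applications to find the Spark one
yarn application -list -appStates FAILED,KILLED

# Fetch the aggregated logs for it (replace the placeholder ID with the real one)
yarn logs -applicationId application_1454000000000_0001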
Created 02-02-2016 08:02 AM
@Artem Ervits This is still an outstanding issue with 1.5.2, and no workaround has been found. However, I have since upgraded to Spark 1.6 with Zeppelin integration, which works fine.
Created on 02-23-2016 08:12 AM - edited 08-19-2019 03:47 AM
I have now got it working: Zeppelin 0.6.0 runs fine on Spark 1.5.2.
If this is still of interest to anyone, let me know and I can describe what I did.
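For anyone hitting this in the meantime: a commonly reported cause of this exact error on HDP (not necessarily the only thing I changed) is the ${hdp.version} placeholder not being substituted when YARN launches the application master. A minimal sketch of that workaround, where the build number is only an example and must be replaced with your cluster's:

# Find the exact HDP build string for your cluster
hdp-select status hadoop-client    # prints e.g. "hadoop-client - 2.3.4.0-3485"

# Then, in the Zeppelin Spark interpreter settings (or spark-defaults.conf),
# pass the build string explicitly to the driver and the YARN application master:
spark.driver.extraJavaOptions    -Dhdp.version=2.3.4.0-3485
spark.yarn.am.extraJavaOptions   -Dhdp.version=2.3.4.0-3485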
Created 03-07-2016 05:28 PM
Hi, we have the same problem with the same Spark and Hadoop versions. How did you solve it? Thank you very much.
Created 03-08-2016 10:32 AM
@Jorge de la peña Jorge, next time please tag the user you are replying to; it was pure luck that I saw your post. Thanks for posting, by the way. It motivated me to finally put together the how-I-did-it write-up.
Hope it helps!
Created 03-09-2016 11:00 AM
@marko Thank you for the blog, but we have a problem. After installing and starting the daemon, the Zeppelin status is "zeppelin running but process is dead" and we cannot connect to the web UI. Do you know why? Thanks.
Created 03-09-2016 11:20 AM
Have you checked the log files? By the way, if you can upgrade to Spark 1.6.0, you should do it.
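On the "running but process is dead" status: in my experience it means the daemon died right after starting while its PID file stayed behind, so the end of the log should show why. A minimal check, assuming the default Ambari paths and file names (adjust to your layout):

# Compare the PID Ambari recorded with what is actually running
cat /var/run/zeppelin/zeppelin-zeppelin-$(hostname).pid
ps -p $(cat /var/run/zeppelin/zeppelin-zeppelin-$(hostname).pid)

# If the process is gone, the end of the daemon log usually has the reason
tail -n 100 /var/log/zeppelin/zeppelin-zeppelin-$(hostname).log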