
Zeppelin not working on upgrading to HDP 2.3.4

New Contributor

We installed Zeppelin as an Ambari service, following the link below. It was working fine and we were able to run Spark and Hive jobs. Recently we upgraded to HDP 2.3.4, and now the Zeppelin service is not working. Are any manual steps required to get Zeppelin working with the new HDP and Spark versions, since Zeppelin was not upgraded as part of the automatic HDP upgrade process?

https://github.com/hortonworks-gallery/ambari-zeppelin-service

This is the error we are getting:

16/01/28 14:41:44 INFO YarnExtensionServices: Service org.apache.spark.deploy.yarn.history.YarnHistoryService started 
16/01/28 14:41:45 INFO Client: Application report for application_1453980715711_0009 (state: ACCEPTED) 
16/01/28 14:41:45 INFO Client: 
client token: N/A 
diagnostics: N/A 
ApplicationMaster host: N/A 
ApplicationMaster RPC port: -1 
queue: default 
start time: 1453992104608 
final status: UNDEFINED 
tracking URL: http://dshost1:8088/proxy/application_1453980715711_0009/
user: zeppelin 
16/01/28 14:41:46 INFO Client: Application report for application_1453980715711_0009 (state: ACCEPTED) 
16/01/28 14:41:47 INFO Client: Application report for application_1453980715711_0009 (state: FAILED) 
16/01/28 14:41:47 INFO Client: 
client token: N/A 
diagnostics: Application application_1453980715711_0009 failed 2 times due to AM Container for appattempt_1453980715711_0009_000002 exited with exitCode: 1 
For more detailed output, check application tracking page:http://dshost1:8088/cluster/app/application_1453980715711_0009Then, click on links to logs of each attempt. 
Diagnostics: Exception from container-launch. 
Container id: container_e18_1453980715711_0009_02_000001 
Exit code: 1 
Stack trace: ExitCodeException exitCode=1: 
at org.apache.hadoop.util.Shell.runCommand(Shell.java:576) 
at org.apache.hadoop.util.Shell.run(Shell.java:487) 
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753) 
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212) 
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302) 
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82) 
at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:745) 
Container exited with a non-zero exit code 1 
Failing this attempt. Failing the application. 
ApplicationMaster host: N/A 
ApplicationMaster RPC port: -1 
queue: default 
start time: 1453992104608 
final status: FAILED 
tracking URL: http://datascience3.unibet.com:8088/cluster/app/application_1453980715711_0009
user: zeppelin 
16/01/28 14:41:47 INFO Client: Deleting staging directory .sparkStaging/application_1453980715711_0009 
16/01/28 14:41:47 ERROR SparkContext: Error initializing SparkContext. 
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master. 
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:125) 
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:65) 
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144) 
at org.apache.spark.SparkContext.<init>(SparkContext.scala:523) 
at org.apache.zeppelin.spark.SparkInterpreter.createSparkContext(SparkInterpreter.java:339) 
at org.apache.zeppelin.spark.SparkInterpreter.getSparkContext(SparkInterpreter.java:149) 
at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:465) 
at org.apache.zeppelin.interpreter.ClassloaderInterpreter.open(ClassloaderInterpreter.java:74) 
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:68) 
at org.apache.zeppelin.spark.SparkSqlInterpreter.getSparkInterpreter(SparkSqlInterpreter.java:103) 
at org.apache.zeppelin.spark.SparkSqlInterpreter.interpret(SparkSqlInterpreter.java:119) 
at org.apache.zeppelin.interpreter.ClassloaderInterpreter.interpret(ClassloaderInterpreter.java:57) 
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:93) 
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:276) 
at org.apache.zeppelin.scheduler.Job.run(Job.java:170) 
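The container-launch stack trace above only reports a generic exit code 1; the real failure reason is usually in the ApplicationMaster container's own log. As a sketch (assuming log aggregation is enabled in YARN and the `yarn` CLI is on the PATH; the application ID is the one from the Client report above), the logs can be pulled with:

```shell
# Application ID copied from the Client report above
APP_ID="application_1453980715711_0009"

# Dump the aggregated container logs for the application, including the
# stdout/stderr of the failed AM attempt. Requires
# yarn.log-aggregation-enable=true and a user permitted to read the logs.
yarn logs -applicationId "$APP_ID"
```

This typically shows the actual exception (for example, a missing or mismatched Spark assembly jar) rather than the bare exit code.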
1 ACCEPTED SOLUTION


The Zeppelin Ambari service downloads a build of Zeppelin compiled against the version of Spark found in your HDP installation. That Spark version most likely changed when you upgraded the cluster, which is what is causing your error. The easiest fix is to uninstall and re-install the Zeppelin service. Step-by-step instructions are here:

https://github.com/hortonworks-gallery/ambari-zeppelin-service#remove-zeppelin-service
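For reference, the service removal can also be driven through the Ambari REST API. This is a sketch, not the exact steps from the README: the host `ambari.example.com:8080`, cluster name `mycluster`, and `admin:admin` credentials are placeholders you must replace with your own.

```shell
AMBARI="http://ambari.example.com:8080"   # hypothetical Ambari server
CLUSTER="mycluster"                        # hypothetical cluster name

# Stop the Zeppelin service first (PUT its desired state to INSTALLED)
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Stop ZEPPELIN"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
  "$AMBARI/api/v1/clusters/$CLUSTER/services/ZEPPELIN"

# Then delete the service definition so it can be re-installed cleanly
curl -u admin:admin -H 'X-Requested-By: ambari' -X DELETE \
  "$AMBARI/api/v1/clusters/$CLUSTER/services/ZEPPELIN"
```

After the delete, restart the Ambari server and re-add the service through "Add Service" as described in the linked instructions.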

In the future, there will be enhancements made to allow the service to support upgrades.


4 REPLIES

New Contributor

Thanks. I had some problems trying to remove and re-add the service. I will try again later.

New Contributor

I finally managed to get it working by removing the service and adding it again. Thanks very much for the input.


Glad to hear it!