
No output from Zeppelin on HDP 2.3 using Spark 1.4 with Zeppelin 0.6.


After following the Apache Zeppelin setup guide provided here - https://urldefense.proofpoint.com/v2/url?u=http-3A... - the Zeppelin notebook does not show any output, even though commands execute successfully.

Here's a subset of the errors seen in YARN logs:

Stack trace: ExitCodeException exitCode=1: /grid/1/hadoop/yarn/local/usercache/root/appcache/application_1447968118518_0003/container_e03_1447968118518_0003_02_000004/launch_container.sh: line 23: :/usr/hdp/current/spark-client/bin/zeppelin-0.6.0-incubating-SNAPSHOT/interpreter/spark/dep/*:/usr/hdp/current/spark-client/bin/zeppelin-0.6.0-incubating-SNAPSHOT/interpreter/spark/*:/usr/hdp/current/spark-client/bin/zeppelin-0.6.0-incubating-SNAPSHOT/lib/*:/usr/hdp/current/spark-client/bin/zeppelin-0.6.0-incubating-SNAPSHOT/*::/usr/hdp/current/spark-client/bin/zeppelin-0.6.0-incubating-SNAPSHOT/conf:/usr/hdp/current/spark-client/bin/zeppelin-0.6.0-incubating-SNAPSHOT/conf:/usr/hdp/current/spark-client/bin/zeppelin-0.6.0-incubating-SNAPSHOT/conf:/etc/hadoop/conf:$PWD:$PWD/__spark__.jar:$HADOOP_CONF_DIR:/usr/hdp/current/hadoop-client/*:/usr/hdp/current/hadoop-client/lib/*:/usr/hdp/current/hadoop-hdfs-client/*:/usr/hdp/current/hadoop-hdfs-client/lib/*:/usr/hdp/current/hadoop-yarn-client/*:/usr/hdp/current/hadoop-yarn-client/lib/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/lib/*:$PWD/mr-framework/hadoop/share/hadoop/common/*:$PWD/mr-framework/hadoop/share/hadoop/common/lib/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/lib/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/lib/*:$PWD/mr-framework/hadoop/share/hadoop/tools/lib/*:/usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-0.6.0.${hdp.version}.jar:/etc/hadoop/conf/secure: bad substitution

I noticed that mapred-site.xml contained "${hdp.version}" variables that were never substituted. Since "." is not valid in a bash parameter name, the generated launch_container.sh fails with "bad substitution" as soon as it hits the unexpanded variable. The workaround was to replace the variable with the actual HDP version in mapred-site.xml and then restart the affected services. See the screenshot below:

[Screenshot: 608-image002.png]
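
For reference, the workaround described above amounts to a one-liner like this (a sketch only: /etc/hadoop/conf is the default config path, 2.3.2.0-2950 is an example version, and Ambari may overwrite manual edits to mapred-site.xml):

# Hard-code the HDP version in place of ${hdp.version}; keeps a .bak backup
sed -i.bak 's/\${hdp\.version}/2.3.2.0-2950/g' /etc/hadoop/conf/mapred-site.xml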

This is posted as an FYI in case anyone else runs into a similar issue. I don't have a root cause for this behavior at this time.

1 ACCEPTED SOLUTION


@Ameet Paranjape Can you double-check that you modified zeppelin-env.sh as mentioned in the blog? hdp.version should be substituted automatically, without users having to change mapred-site configs (see the note below about the "bad substitution" message).

In the zeppelin-env.sh file, add the following.

Note: you will use the ZEPPELIN_PORT value to access the Zeppelin Web UI. <HDP-version> corresponds to the version of HDP where you are installing Zeppelin; for example, 2.3.2.0-2950.


export HADOOP_CONF_DIR=/etc/hadoop/conf
export ZEPPELIN_PORT=9995
export ZEPPELIN_JAVA_OPTS="-Dhdp.version=<HDP-version>"

To obtain the HDP version of your cluster, run the following command:
 hdp-select status hadoop-client | sed 's/hadoop-client - \(.*\)/\1/'
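
If you prefer not to copy the version by hand, the two steps above can be combined into one line in zeppelin-env.sh (a sketch, assuming hdp-select is on the PATH):

# Resolve the HDP version at startup instead of hard-coding it
export ZEPPELIN_JAVA_OPTS="-Dhdp.version=$(hdp-select status hadoop-client | sed 's/hadoop-client - \(.*\)/\1/')"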
Add the following properties in spark-defaults.conf:
spark.driver.extraJavaOptions -Dhdp.version=2.3.2.0-2950
spark.yarn.am.extraJavaOptions -Dhdp.version=2.3.2.0-2950

Note
Make sure that both spark.driver.extraJavaOptions and spark.yarn.am.extraJavaOptions are saved.
Without these properties set, the Spark job will fail with a "bad substitution" error.
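
As a quick check that the properties took effect, you can grep the Spark config from a Zeppelin shell paragraph (the path below assumes the spark-client location visible in the logs above):

%sh
# Should print both extraJavaOptions lines with the resolved hdp.version
grep extraJavaOptions /usr/hdp/current/spark-client/conf/spark-defaults.conf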

Also note that the version of Zeppelin used in the TP blog is actually an early build of 0.5.5.

I tested both this build and the released Zeppelin 0.5.5 on HDP 2.3.2, with both Spark 1.4.1 and Spark 1.5.1 TP, using the Zeppelin service, and did not encounter issues running Spark code. The issues I encountered were around Hive (see here for details).


7 REPLIES


Master Mentor

Accepting this as best answer. Thanks @Ali Bajwa

Contributor

As @Ali Bajwa wrote above, use the Zeppelin Service to install Zeppelin on HDP.


@Ali Bajwa and @Dhruv Kumar, thanks for the suggestions. Like you, I could not reproduce this on a fresh install. I no longer have access to the environment that showed this behavior, but I know it had gone through multiple Zeppelin version changes, and perhaps that caused the problem...

Master Mentor

@Ameet Paranjape @Mark Herring

Shall we close this question as not reproducible?


Sure! I have seen this same issue of hdp.version not getting replaced many times, and it is almost always the result of missing hdp.version settings in the interpreter configuration, zeppelin-env.sh, or spark-defaults.conf. Following the steps from the blog (or using the Ambari service) should resolve it.
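
For anyone debugging this, here is a quick way to confirm where hdp.version is (or is not) set; the paths are the defaults seen earlier in this thread and may differ on your cluster:

# List every hdp.version occurrence across the usual suspects
ZEPPELIN_HOME=/usr/hdp/current/spark-client/bin/zeppelin-0.6.0-incubating-SNAPSHOT
grep -H 'hdp.version' "$ZEPPELIN_HOME/conf/zeppelin-env.sh" /usr/hdp/current/spark-client/conf/spark-defaults.conf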


I would mark Ali's reply as the accepted answer...?