Support Questions
Find answers, ask questions, and share your expertise

Spark 2.1 with Zeppelin 0.7.0 on HDP 2.6

New Contributor

I am trying to get Spark 2.1 working on Zeppelin 0.7.0 running on HDP 2.6. The Spark interpreter seems to either time out when trying to start, or it throws an exception that I have not yet been able to observe. I found guidance related to a pre-release HDP suggesting that I needed to comment out SPARK_HOME from the file. If I do this, YARN fails with the message that SPARK_HOME is not defined.

So I have also tried configuring Zeppelin to use HDP's Spark2 installation. If I set SPARK_HOME to /usr/hdp/current/spark2-client/ I get the following exception:
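For reference, this is roughly what the relevant lines look like in Zeppelin's conf/ when pointing at the HDP Spark2 client. This is a sketch based on a default HDP layout; on HDP the file is usually managed through Ambari's zeppelin-env template rather than edited by hand, so the exact names and paths on your cluster may differ:

```sh
# conf/ (sketch; assumes default HDP 2.6 paths)
export SPARK_HOME=/usr/hdp/current/spark2-client
export HADOOP_CONF_DIR=/etc/hadoop/conf
# export MASTER=yarn-client   # only when the interpreter should run against YARN
```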

 INFO [2017-04-14 12:19:41,497] ({pool-2-thread-2}[start]:126) - Run interpreter process [/usr/hdp/current/zeppelin-server/bin/, -d, /usr/hdp/current/zeppelin-server/interpreter/spark, -p, 41150, -l, /usr/hdp/current/zeppelin-server/local-repo/2C3P5E7QX, -g, spark]
 INFO [2017-04-14 12:19:42,389] ({Exec Default Executor}[onProcessComplete]:170) - Interpreter process exited 0
ERROR [2017-04-14 12:20:11,602] ({pool-2-thread-2}[run]:188) - Job failed
org.apache.zeppelin.interpreter.InterpreterException: org.apache.zeppelin.interpreter.InterpreterException: org.apache.thrift.transport.TTransportException: Connection refused
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.init(
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.getFormType(
    at org.apache.zeppelin.interpreter.LazyOpenInterpreter.getFormType(
    at org.apache.zeppelin.notebook.Paragraph.jobRun(
    at org.apache.zeppelin.scheduler.RemoteScheduler$
    at java.util.concurrent.Executors$
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(
    at java.util.concurrent.ScheduledThreadPoolExecutor$
    at java.util.concurrent.ThreadPoolExecutor.runWorker(
    at java.util.concurrent.ThreadPoolExecutor$
Caused by: org.apache.zeppelin.interpreter.InterpreterException: org.apache.thrift.transport.TTransportException: Connection refused
    at org.apache.zeppelin.interpreter.remote.ClientFactory.create(
    at org.apache.zeppelin.interpreter.remote.ClientFactory.create(
    at org.apache.commons.pool2.BasePooledObjectFactory.makeObject(
    at org.apache.commons.pool2.impl.GenericObjectPool.create(
    at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(
    at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterProcess.getClient(
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.init(
    ... 12 more
Caused by: org.apache.thrift.transport.TTransportException: Connection refused
    at org.apache.zeppelin.interpreter.remote.ClientFactory.create(
    ... 19 more
Caused by: Connection refused
    at Method)
    ... 20 more

What is the correct setup to make Spark 2.1 work with Zeppelin 0.7? Also, is there anything else I can turn on to provide more useful diagnostics?
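On the diagnostics question, one option is to turn up logging for the interpreter plumbing. A sketch, assuming the stock conf/ shipped with Zeppelin 0.7 (check your distribution's copy, as the appender names may differ):

```properties
# conf/ (sketch; assumes Zeppelin's default log4j setup)
log4j.rootLogger = INFO, dailyfile
# more detail from the remote-interpreter code that is failing here: = DEBUG
log4j.logger.org.apache.zeppelin.scheduler = DEBUG
```

It is also worth checking Zeppelin's logs/ directory directly: the Spark interpreter runs as a separate process and writes its own zeppelin-interpreter-spark-*.log file there in a default setup, which is usually where the real startup exception lands.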


Re: Spark 2.1 with Zeppelin 0.7.0 on HDP 2.6


@Steve Severance, can you please explain how you installed Spark 2.1 and Zeppelin? Did you use Ambari to install them?

You can also try Hwx Cloud to start Spark 2.1 and Zeppelin, as below.

Re: Spark 2.1 with Zeppelin 0.7.0 on HDP 2.6

Rising Star

Spark 2.1 and Zeppelin 0.7 can be run standalone as follows:

Spark Installation

Do the following:

1. Go to and download the latest file.

2. Unzip the file to the appropriate location.

3. Read and follow the instructions.

4. After the installation, go to Spark's bin directory in a command window and run spark-shell to see the Scala prompt. You can then close the command window.

*Summary of step 3 above:

- Download the winutils.exe binary from the repository. (You should select the version matching the Hadoop build of your Spark distribution.)

- Save the winutils.exe binary to a directory of your choice, e.g. c:\hadoop\bin.

- Set HADOOP_HOME to the directory containing winutils.exe, without the bin part, e.g. set HADOOP_HOME=c:\hadoop

- Set the PATH environment variable to include %HADOOP_HOME%\bin as follows: set PATH=%HADOOP_HOME%\bin;%PATH%

- Create the c:\tmp\hive directory.

- Execute the winutils.exe chmod -R 777 \tmp\hive command, and then check with the winutils.exe ls \tmp\hive command.

Zeppelin Installation

Do the following:

1. Go to and download the latest file.

2. Unzip the file to the appropriate location.

3. Go to

4. Copy the content of interpreter.json and save it into the conf/interpreter.json file. If you don't find the file in the conf directory, create it.

5. Learn how to start and stop Zeppelin in

6. Go to http://localhost:8080, click the anonymous user at the top right, and click Interpreter. Look for the Spark section and click the edit button at the right.

7. Update the master value to local[*], save, and restart the Spark interpreter. The restart button is next to the edit button.

8. Don't use the bundled tutorial; it does not work. Instead, use Spark's latest tutorial:

9. When you code in Scala you don't need to specify any interpreter such as %xyz, but use %sql when you use Spark SQL.
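As a quick smoke test once the interpreter is set up, a minimal note along these lines exercises both the default Scala interpreter and %sql. This is just a sketch; the table and column names are illustrative, and it assumes the `spark` SparkSession that Zeppelin provides with Spark 2.x:

```scala
// Paragraph 1 - default Spark interpreter, no % prefix needed.
// `spark` is the SparkSession Zeppelin injects for Spark 2.x.
import spark.implicits._

val df = Seq((1, "alpha"), (2, "beta")).toDF("id", "name")
df.createOrReplaceTempView("demo")

// Paragraph 2 - a separate paragraph starting with %sql:
// %sql
// SELECT id, name FROM demo WHERE id > 1
```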

Re: Spark 2.1 with Zeppelin 0.7.0 on HDP 2.6

@Steve Severance

I have tested the same at my site with HDP 2.6 and Ambari. I did not change any configs - all defaults. This works fine at my end. See the screenshot below:


It would be nice to know:

a) how you installed HDP 2.6 - is it a fresh install or an upgrade?

b) what you changed in terms of the spark2 service, the zeppelin service, and the spark2 interpreter

Re: Spark 2.1 with Zeppelin 0.7.0 on HDP 2.6

New Contributor

I had done an upgrade from 2.5. I removed Zeppelin entirely and reinstalled it on a different host, and now it works fine.