
Are jar files missing for the Spark interpreter?

Contributor

Hi,

I am facing some strange behaviour in Spark.

I am following the tutorial (Lab 4 - Spark analysis), but when I execute the command the script itself is returned, as you can see in the picture (red rectangle).

I have tables in my Hive database default.

I checked the jar files in my Spark interpreter and found the jar files shown in the picture sparkjarfiles.png. Are some jar files missing?

Any suggestions? hivedisplay.png hicecontextdisplay.png

1 ACCEPTED SOLUTION


@Oriane

Can you provide the following:

1. As @Bernhard Walter already asked, can you attach a screenshot of your Spark interpreter config from the Zeppelin UI?

2. Create a new notebook, run the following, and send the output:

%sh
whoami

3. Can you attach the output of

$ ls -lrt /usr/hdp/current/zeppelin-server/local-repo

4. Is your cluster Kerberized?
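For comparison, here is a minimal %spark paragraph (just a sketch, assuming a standard HDP Zeppelin setup) that prints the user and Spark version the Spark interpreter itself runs with, so you can compare it with the %sh whoami output from step 2:

%spark
// Sketch: show which OS user and Spark version the Spark interpreter sees
println(System.getProperty("user.name"))
println(sc.version)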


23 REPLIES

Master Mentor

@Oriane

By any chance, is there an extra line before the "%spark", or any special character (it may have crept in while copying and pasting)? Can you manually type those %spark script lines fresh and then test again?
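For reference, a minimal paragraph laid out this way (just a sketch; the statement itself is only a trivial job to confirm the interpreter responds) would be:

%spark
// "%spark" must be the very first line, with nothing before it
sc.parallelize(1 to 3).count()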

Master Mentor

12818-zeppelin.png

Ideally it should work.

Contributor

Hi @Jay,

I checked, but there is no extra line before "%spark".

I typed it manually but am still facing the problem. hicecontextdisplay-2.png


Side note: in HDP Zeppelin, sqlContext defaults to a HiveContext. So something like

%spark 
sqlContext.sql("show tables").collect.foreach(println)

should work.

Alternatively:

%sql
show tables

As Jay mentioned, the % needs to be on the first line.
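If you want to confirm what sqlContext actually is in your setup (a sketch, assuming the HDP default described above), this should print a HiveContext class name:

%spark
// Sketch: print the concrete SQLContext implementation Zeppelin has bound
println(sqlContext.getClass.getName)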

Contributor

Hi @Bernhard, I have tried both but am facing the same problem.

Maybe jar files are missing in my Spark interpreter?

sqlnotwork.png sqlcontexttest.png


Does Zeppelin send anything to the Spark interpreter at all?

What is

%spark 
print(sc.version)

printing? No HiveContext is necessary for this.
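A few more basic sanity checks along the same lines (a sketch; none of these touch Hive) would be:

%spark
// Sketch: basic SparkContext checks, no Hive involved
println(sc.version)        // Spark version bound to the interpreter
println(sc.master)         // e.g. yarn-client on a typical HDP setup
println(sc.applicationId)  // only works if a SparkContext was really created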

Contributor

Hi @Bernhard, sorry for not respecting what you said earlier about the first line.

The %jdbc(hive) and %sql interpreters are working well, because "show tables" displays the expected result.

The problem I have is with %spark (respecting the first line), and I get errors as you can see in the attached picture.

In the interpreter binding, I have selected spark and saved.

sparkproblem.png

sparkproblem1.png

sparkconfiguration.png

I also attached the Spark interpreter configuration.


And what do the interpreter settings say? Here are mine.

12827-interpreter-settings.png

Note: a simple "python" in zeppelin.pyspark.python is also OK.

... by the way, I have the same libs in my Zeppelin Spark libs folder.

Have you tried to restart the interpreter?
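As one more check (a sketch; these are just standard Spark property names, so adjust to your setup), you could print a couple of the properties the interpreter actually picked up and compare them with the settings screenshot:

%spark
// Sketch: show a few properties the running SparkContext actually loaded
println(sc.getConf.get("spark.master", "not set"))
println(s"${sc.getConf.getAll.length} properties loaded")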
