
SparkContext has been shutdown in Zeppelin


New Contributor

I am using the %spark interpreter and executing JDBC code to access a Phoenix table, and I register the resulting DataFrame as a temp table named "cool".

I am able to print the schema of the DataFrame with printSchema().

Now in a second paragraph, using the %sql interpreter, I try to read the temp table and display its contents with a select statement, and I get the following error.

The code is as follows:

1st paragraph

%spark
val table = sqlContext.read.format("jdbc").options(Map(
  "driver"  -> "org.apache.phoenix.jdbc.PhoenixDriver",
  "url"     -> "jdbc:phoenix:<hbase_server>:2181:/hbase-unsecure",
  "dbtable" -> "FBC_DEV_CORR"
)).load()
table.registerTempTable("cool")

2nd paragraph

%sql
select * from cool

Error:

java.lang.IllegalStateException: SparkContext has been shutdown
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1848)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1869)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1882)
	at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:212)
	at org.apache.spark.sql.execution.Limit.executeCollect(basicOperators.scala:165)
	at org.apache.spark.sql.execution.SparkPlan.executeCollectPublic(SparkPlan.scala:174)
	at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499)
	at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)
	at org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:2086)
	at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$execute$1(DataFrame.scala:1498)
	at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$collect(DataFrame.scala:1505)
	at org.apache.spark.sql.DataFrame$$anonfun$head$1.apply(DataFrame.scala:1375)
	at org.apache.spark.sql.DataFrame$$anonfun$head$1.apply(DataFrame.scala:1374)
	at org.apache.spark.sql.DataFrame.withCallback(DataFrame.scala:2099)
	at org.apache.spark.sql.DataFrame.head(DataFrame.scala:1374)
	at org.apache.spark.sql.DataFrame.take(DataFrame.scala:1456)
	at org.apache.spark.sql.DataFrame.showString(DataFrame.scala:170)
	at org.apache.spark.sql.DataFrame.show(DataFrame.scala:350)
	at org.apache.spark.sql.DataFrame.show(DataFrame.scala:311)
	at org.apache.spark.sql.DataFrame.show(DataFrame.scala:319)
	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:40)
	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:45)
	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:47)
	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:49)
	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:51)
	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:53)
	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:55)
	at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:57)
	at $iwC$$iwC$$iwC$$iwC.<init>(<console>:59)
	at $iwC$$iwC$$iwC.<init>(<console>:61)
	at $iwC$$iwC.<init>(<console>:63)
	at $iwC.<init>(<console>:65)
	at <init>(<console>:67)
	at .<init>(<console>:71)
	at .<clinit>(<console>)
	at .<init>(<console>:7)
	at .<clinit>(<console>)
	at $print(<console>)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
	at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
	at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
	at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
	at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
	at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:38)
	at org.apache.zeppelin.spark.SparkInterpreter.interpret(SparkInterpreter.java:717)
	at org.apache.zeppelin.spark.SparkInterpreter.interpretInput(SparkInterpreter.java:928)
	at org.apache.zeppelin.spark.SparkInterpreter.interpret(SparkInterpreter.java:871)
	at org.apache.zeppelin.spark.SparkInterpreter.interpret(SparkInterpreter.java:864)
	at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:94)
	at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:341)
	at org.apache.zeppelin.scheduler.Job.run(Job.java:176)
	at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745) 

Note:

I also tried accessing the table in paragraph 1 itself with the following code, but I get the same error:

val roger = sqlContext.sql("select * from cool limit 10")
roger.show()
2 Replies

Re: SparkContext has been shutdown in Zeppelin

New Contributor

It looks like an exception caused the SparkContext to shut down. Can you check which application shows as FAILED in the YARN ResourceManager UI? Click through it and you can find the logs of the individual containers, which should show the failure.

(My hunch is it's probably an unhandled exception while reading from the JDBC connection.)
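If the ResourceManager UI is awkward to reach, the same container logs can usually be fetched with yarn logs -applicationId <application_id>. You can also try to surface the root cause directly in the notebook; below is a minimal sketch, reusing the connection options from the question (the driver, URL, and table name are copied from there, not verified here). Run it in a fresh interpreter session, since a stopped SparkContext will fail everything:

%spark
import scala.util.{Failure, Success, Try}

// load() alone is lazy, so a broken JDBC connection may only blow up
// later, when %sql actually runs a job; count() forces the read now.
Try {
  sqlContext.read.format("jdbc").options(Map(
    "driver"  -> "org.apache.phoenix.jdbc.PhoenixDriver",
    "url"     -> "jdbc:phoenix:<hbase_server>:2181:/hbase-unsecure",
    "dbtable" -> "FBC_DEV_CORR"
  )).load().count()
} match {
  case Success(n) => println(s"read $n rows")
  case Failure(e) => e.printStackTrace() // the real cause, not just "SparkContext has been shutdown"
}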

Re: SparkContext has been shutdown in Zeppelin

New Contributor

I encountered the same error and could not find anything relevant in the logs. Eventually I restarted Zeppelin in Ambari and then restarted the Spark interpreter. Not the most elegant solution, but it worked for me.
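For what it's worth, before restarting the whole service you can ask the context itself whether it is dead; a one-line check, assuming your Spark build exposes SparkContext.isStopped (recent releases do):

%spark
// true means the driver-side SparkContext is gone, and every job in this
// interpreter will keep failing until the interpreter is restarted.
println(sc.isStopped)

If it prints true, restarting just the Spark interpreter from Zeppelin's Interpreter settings page is usually enough; a full Zeppelin restart in Ambari should only be needed if the interpreter process itself is stuck.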
