Multiple Spark jobs failed

New Contributor

Spark jobs failed after running for ~600 hours with the error below:

ERROR util.Utils: Uncaught exception in thread shutdown-hook-0
java.lang.InterruptedException
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2067)
        at java.util.concurrent.ThreadPoolExecutor.awaitTermination(ThreadPoolExecutor.java:1475)
        at org.apache.spark.streaming.scheduler.JobScheduler.stop(JobScheduler.scala:133)
        at org.apache.spark.streaming.StreamingContext$$anonfun$stop$1.apply$mcV$sp(StreamingContext.scala:686)
        at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1314)
        at org.apache.spark.streaming.StreamingContext.stop(StreamingContext.scala:685)
        at org.apache.spark.streaming.StreamingContext.org$apache$spark$streaming$StreamingContext$$stopOnShutdown(StreamingContext.scala:722)
        at org.apache.spark.streaming.StreamingContext$$anonfun$start$1.apply$mcV$sp(StreamingContext.scala:604)
        at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:214)
        at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ShutdownHookManager.scala:188)
        at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
        at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1919)
        at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply$mcV$sp(ShutdownHookManager.scala:188)
        at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
        at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
        at scala.util.Try$.apply(Try.scala:192)
        at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:188)
        at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:178)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

2 Replies

Community Manager

Hello @Albap, based on the log alone it is difficult to pinpoint the issue. We recommend that you create a support case. Thanks!


Regards,

Diana Torres,
Community Moderator



Master Collaborator

Hi @Albap,

 

Based on the logs, I can see you have created a streaming application. By default, a streaming application runs 24/7; it stops only when it is killed or an interrupting event occurs at the system level.

 

The better way to stop a Spark Streaming application is a graceful shutdown, which lets in-flight batches finish before the context is torn down.
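
For example, here is a minimal sketch of a gracefully-stopping streaming application (the app name, batch interval, and in-memory source are placeholders; replace them with your real job):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import scala.collection.mutable

object GracefulShutdownExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setMaster("local[2]")                   // for local testing only; set by spark-submit on a cluster
      .setAppName("graceful-shutdown-example") // placeholder name
      // Let Spark's shutdown hook drain in-flight and queued batches
      // before stopping, rather than cutting the JobScheduler off abruptly.
      .set("spark.streaming.stopGracefullyOnShutdown", "true")

    val ssc = new StreamingContext(conf, Seconds(10)) // batch interval is illustrative

    // Trivial in-memory source so the sketch is self-contained;
    // replace with your real input DStream (Kafka, socket, etc.).
    val rdd = ssc.sparkContext.parallelize(Seq("a", "b", "c"))
    ssc.queueStream(mutable.Queue(rdd)).print() // at least one output operation is required

    ssc.start()
    ssc.awaitTermination()
  }
}

Alternatively, when your own control logic decides when to stop, call ssc.stop(stopSparkContext = true, stopGracefully = true) so pending batches are processed before the context shuts down.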

 

If you need further help, please raise a Cloudera support case and we will work on it with you.