Spark Streaming: FileNotFoundException on files included in --jars after running a few days

Expert Contributor

CDH 5.5.1 installed with parcels, CentOS 6.7

 

I have a Spark Streaming job that uses Phoenix (phoenix-1.2.0-client.jar). After the job had been running for a few days, it tried to reload the jar and failed with a FileNotFoundException.

Command used to start the job:

nohup spark-submit --master yarn --deploy-mode client --class com.myCompany.MyStreamProc --driver-class-path /opt/mycompany/my-spark.jar:/opt/cloudera/parcels/CLABS_PHOENIX/lib/phoenix/phoenix-1.2.0-client.jar:... --jars /opt/mycompany/my-spark.jar,/opt/cloudera/parcels/CLABS_PHOENIX/lib/phoenix/phoenix-1.2.0-client.jar,... my-spark.jar


Log entries around the FileNotFoundException in the driver log:

[INFO] 2016-05-28 15:28:00,052 org.apache.spark.scheduler.TaskSetManager logInfo - Starting task 69.0 in stage 27723.0 (TID 1692793, node3.mycompany.com, partition 69,NODE_LOCAL, 2231 bytes)
[INFO] 2016-05-28 15:28:00,205 org.apache.spark.storage.BlockManagerInfo logInfo - Added input-0-1464420480000 in memory on node1.mycompany.com:47601 (size: 15.0 KB, free: 302.0 MB)
[INFO] 2016-05-28 15:28:00,213 org.apache.spark.storage.BlockManagerInfo logInfo - Added input-0-1464420480000 in memory on node2.mycompany.com:42510 (size: 15.0 KB, free: 308.7 MB)
[INFO] 2016-05-28 15:28:00,351 org.apache.spark.scheduler.TaskSetManager logInfo - Starting task 70.0 in stage 27723.0 (TID 1692794, node2.mycompany.com, partition 70,NODE_LOCAL, 2231 bytes)
[WARN] 2016-05-28 15:28:00,391 org.apache.spark.scheduler.TaskSetManager logWarning - Lost task 69.0 in stage 27723.0 (TID 1692793, node2.mycompany.com): java.io.FileNotFoundException: http://192.168.88.28:55310/jars/phoenix-1.2.0-client.jar
        at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1624)
        at org.apache.spark.util.Utils$.doFetchFile(Utils.scala:556)
        at org.apache.spark.util.Utils$.fetchFile(Utils.scala:356)
        at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$5.apply(Executor.scala:405)
        at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$5.apply(Executor.scala:397)
        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
        at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
        at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
        at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
        at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
        at org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$updateDependencies(Executor.scala:397)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:193)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)

(Note: node3.mycompany.com = 192.168.88.28)


According to the executor logs, when the job started (2016-05-09) the executors downloaded http://192.168.88.28:55310/jars/phoenix-1.2.0-client.jar successfully.

 

It seems to me that Spark somehow wants to re-fetch the jar, but the file has gone missing. Any suggestions? Has the job been running for too long (nearly 20 days already)?


6 REPLIES

Rising Star

This looks weird. Can you confirm that

http://192.168.88.28:55310/jars/phoenix-1.2.0-client.jar

is indeed no longer present?

 

Spark keeps all JARs specified with the --jars option in the job's temp directory on each executor node [1]. There must be some OS setting that led to the deletion of the existing Phoenix jar from that temp directory; when Spark cannot find it at its usual location, it tries to download it again from the given URL. However, this should not happen while the temp directory is actively being used by the job or process.
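
As a rough check (a sketch only: the spark-* directory names carry random suffixes and will differ on every node), you can look for the fetched copy on the driver and executor machines:

# assumes the default spark.local.dir (/tmp); adjust if you have changed it
ls -d /tmp/spark-*
find /tmp/spark-* -name 'phoenix-*.jar'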

 

You can try bundling that JAR into your application JAR and then referring to it in spark-submit.
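
A minimal sketch of what that could look like, assuming the Phoenix classes are shaded into a single application jar (the name my-spark-with-deps.jar is hypothetical; it could be built with the Maven Shade plugin or sbt-assembly):

nohup spark-submit --master yarn --deploy-mode client --class com.myCompany.MyStreamProc my-spark-with-deps.jar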

I suspect you will need another 20-odd days to test this workaround 🙂

Expert Contributor
It is definitely not present. Actually, I forgot to mention: the Spark Streaming job killed itself after the FileNotFoundException.

Where is the job's temp directory? Or rather, where is it configured?

Rising Star

See the Environment tab of the Job History UI and locate "spark.local.dir".

Yes, that is the expected behaviour, as the JAR is required by the executors.
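
If the History Server UI is not available, a rough alternative check (a sketch; the config path is typical for a CDH parcel install and may differ on your cluster):

grep -r spark.local.dir /etc/spark/conf/ 2>/dev/null
# no match means nothing is set explicitly, so Spark falls back to /tmp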

Expert Contributor

I couldn't find spark.local.dir in either the Job History UI (it hit an OutOfMemoryException and all job history was gone after the restart) or the Application UI. However, according to the documentation, spark.local.dir defaults to /tmp, and the jar files are indeed found under /tmp/spark-.../ . So the FileNotFoundException was most likely caused by housekeeping of /tmp.
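
For reference, on CentOS 6 the daily tmpwatch cron job is the usual /tmp housekeeper; a quick way to confirm it is active (a sketch, assuming the stock cron script location):

ls -l /etc/cron.daily/tmpwatch
cat /etc/cron.daily/tmpwatch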

New Contributor

Have you fixed this issue? I am suffering from the same issue in a Spark Streaming application on 1.6.2.

Expert Contributor

The cause in my case is described in messages 4-5 of this thread. Here are some possible solutions (a sketch of the first two follows the list):

  • set spark.local.dir to a directory outside /tmp; refer to the Spark Configuration documentation for how to set the value
  • exclude /tmp/spark-... directories from /tmp housekeeping
  • periodically restart your Spark Streaming job
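
A sketch of the first two options (the directory /data/spark-tmp and the tmpwatch exclusion are illustrative examples, not values from my setup):

# option 1: point spark.local.dir outside /tmp, either per job ...
spark-submit --conf spark.local.dir=/data/spark-tmp ...
# ... or cluster-wide in spark-defaults.conf:
# spark.local.dir  /data/spark-tmp

# option 2 (CentOS 6): have tmpwatch skip Spark's temp directories,
# e.g. by adding an exclude pattern to the tmpwatch invocation in /etc/cron.daily/tmpwatch:
#   -X '/tmp/spark-*'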