
spark job failure with no space left on device

I am running a Spark job that fails with the error "no space left on device", even though there is plenty of space available on the device. I have checked with df -h and df -i and see no issue with disk space or inodes.

Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 656 in stage 11.0 failed 4 times, most recent failure: Lost task 656.3 in stage 11.0 (TID 680, I<workernode>): java.io.IOException: No space left on device

Re: spark job failure with no space left on device

Hi @Anurag Mishra

Spark writes intermediate shuffle and spill files to /tmp by default, and that is most likely where it ran out of space, even though the rest of the filesystem has room. Point spark.local.dir at a directory with more space, either at submission time or permanently. Try the same job again, adding this to spark-submit: --conf "spark.local.dir=/directory/with/space"
If that works, you can make the change permanent by adding the property spark.local.dir=/directory/with/space to the custom spark defaults in Ambari.
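
As a rough sketch, the submission could look like the following (the class name, jar, and /data/spark-tmp directory are placeholders; substitute your own application and a path with free space):

spark-submit \
    --master yarn \
    --conf "spark.local.dir=/data/spark-tmp" \
    --class com.example.MyJob \
    my-job.jar

Note that the directory must exist and be writable by the Spark user on every node where executors run.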

See also: https://spark.apache.org/docs/latest/configuration.html#application-properties
