Support Questions
Find answers, ask questions, and share your expertise

Spark org.apache.spark.SparkException: Task failed while writing rows with spark session


New Contributor

Hi,

I am running my Spark job on an EMR cluster with 6 GB executor memory, 5 GB driver memory, and 1 GB memory overhead.

But my task is failing with the error below while writing to HDFS using a Spark session. I am storing the file in ORC format with Snappy compression.

Error:

at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 42 in stage 11.0 failed 4 times, most recent failure: Lost task 42.3 in stage 11.0 (TID 3170, "server_IP", executor 23): ExecutorLostFailure (executor 23 exited caused by one of the running tasks)
Reason: Container killed by YARN for exceeding memory limits. 8.2 GB of 6.6 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.

Could you please give some suggestions?

thanks,

9 Replies

Re: Spark org.apache.spark.SparkException: Task failed while writing rows with spark session

Contributor

Hi Mani,

The 6.6 GB in the error is the total YARN container limit (executor memory plus spark.yarn.executor.memoryOverhead), and your tasks are using 8.2 GB. Consider boosting spark.yarn.executor.memoryOverhead so the container limit rises above 8.2 GB, for example by adding "--conf spark.yarn.executor.memoryOverhead=10g" to the spark-submit command. You could also work around this by increasing the number of partitions (repartitioning) or the number of executors.
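For what it's worth, the reported 6.6 GB limit matches Spark's default overhead (max(384 MB, 10% of executor memory)) rather than the 1 GB mentioned in the question, which may mean the original overhead setting was never picked up. A quick sketch of the container arithmetic, using only the numbers already in this thread:

```shell
# Reproduce the 6.6 GB container limit from the submit settings:
#   container limit = executor memory + memoryOverhead,
# where the default overhead is max(384 MB, 10% of executor memory).
EXECUTOR_MB=6144                                      # --executor-memory 6g
DEFAULT_OVERHEAD_MB=$(( EXECUTOR_MB / 10 ))           # 614 MB, above the 384 MB floor
CONTAINER_MB=$(( EXECUTOR_MB + DEFAULT_OVERHEAD_MB ))
echo "container limit: ${CONTAINER_MB} MB"            # 6758 MB ~= 6.6 GB, as in the error
```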

Re: Spark org.apache.spark.SparkException: Task failed while writing rows with spark session

New Contributor

Hi dbompart, thanks for your suggestion.

I have tried the Spark job with spark.yarn.executor.memoryOverhead=10g, but it still fails with the same issue:

ExecutorLostFailure (executor 19 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 8.4 GB of 6.6 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead

I have also tried the workaround: I increased the partition count from 66 (the value before repartitioning the DataFrame) to 200 using repartition. It still doesn't work, and it takes more time than with 66 partitions because of the extra shuffle.
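If the extra shuffle from an explicit repartition() call is the concern, one related knob can be set at submit time instead — a hypothetical sketch, reusing the placeholder class, jar, and parameter names from this thread (spark.sql.shuffle.partitions only controls the partition count after DataFrame/Dataset shuffles such as joins and aggregations, so it is not an exact substitute for repartition(200)):

```shell
# Hypothetical variant: set shuffle parallelism via config instead of calling
# repartition() in code. myclass, myjar.jar, and the params are placeholders.
spark-submit --master yarn --deploy-mode client \
  --driver-memory 5g --executor-memory 6g \
  --conf spark.yarn.executor.memoryOverhead=10g \
  --conf spark.sql.shuffle.partitions=200 \
  --class myclass myjar.jar param1 param1 param3 param4 param5
```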

Could you please help here...

Re: Spark org.apache.spark.SparkException: Task failed while writing rows with spark session

Contributor

Sure, can you share your spark-submit command with its arguments as well? Please mask any sensitive information.

Re: Spark org.apache.spark.SparkException: Task failed while writing rows with spark session

New Contributor

Hi,

spark-submit --master yarn --deploy-mode client --driver-memory 5g --executor-memory 6g --conf "spark.yarn.executor.memoryOverhead=10g" --class myclass myjar.jar param1 param1 param3 param4 param5

Re: Spark org.apache.spark.SparkException: Task failed while writing rows with spark session

Contributor

Hi Mani, use --executor-memory 10g instead of 6g, and remove the memoryOverhead config property.

Re: Spark org.apache.spark.SparkException: Task failed while writing rows with spark session

New Contributor

Hi dbompart,

Thank you for your help, but that option didn't work. When I ran the job with --executor-memory 10g, it failed with the same error, just with different sizes (11.8 GB of 10 GB physical memory used):

spark-submit --master yarn --deploy-mode client --driver-memory 5g --executor-memory 10g --class myclass myjar.jar param1 param1 param3 param4 param5

So I tried with 15 GB of executor memory:

spark-submit --master yarn --deploy-mode client --driver-memory 5g --executor-memory 15g --class myclass myjar.jar param1 param1 param3 param4 param5

But now the tasks take much longer (a count took 1.2 hours, whereas with 10 GB of executor memory it took 11 minutes), and the job failed with the error below.

ExecutorLostFailure (executor 1 exited caused by one of the running tasks) Reason: Slave lost

Re: Spark org.apache.spark.SparkException: Task failed while writing rows with spark session

Contributor

Hi Mani, you might also want to increase the number of executors then; you may then be able to lower the memory per executor. Try:

spark-submit --master yarn --deploy-mode client --driver-memory 5g --num-executors 6 --executor-memory 8g --class myclass myjar.jar param1 param1 param3 param4 param5

Tuning this requires a lot of other information: input data size, application use case, data source details, available cluster resources, etc. Keep tuning --num-executors, --executor-memory, and --executor-cores (5 is usually a good number).
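As a rough sizing sketch of that heuristic (the 16-core / 64 GB worker-node specs below are assumptions for illustration, not from this thread): reserve a core and some memory per node for OS daemons, give each executor about 5 cores, split the remaining memory evenly, and leave roughly 10% of each executor's share for memoryOverhead:

```shell
# Hypothetical executor-sizing arithmetic; node specs are assumed values.
NODE_CORES=16
NODE_MEM_GB=64
CORES_PER_EXECUTOR=5
EXECUTORS_PER_NODE=$(( (NODE_CORES - 1) / CORES_PER_EXECUTOR ))    # 3 (1 core left for daemons)
MEM_PER_EXECUTOR_GB=$(( (NODE_MEM_GB - 1) / EXECUTORS_PER_NODE ))  # 21 GB slice per executor
HEAP_GB=$(( MEM_PER_EXECUTOR_GB * 90 / 100 ))                      # ~90% heap, rest for overhead
echo "--executor-cores ${CORES_PER_EXECUTOR} --executor-memory ${HEAP_GB}g (x${EXECUTORS_PER_NODE} per node)"
```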

Re: Spark org.apache.spark.SparkException: Task failed while writing rows with spark session

New Contributor

Thanks dbompart, and sorry for the late reply.

I have tried different options (number of cores, number of executors, executor memory, overhead memory), but I still hit the same issue.

When I repartition before the action, it takes more time, and the shuffle read/write grows to 50 GB (actual data size 8.9 GB).

Will keep trying...

---

Thanks

Re: Spark org.apache.spark.SparkException: Task failed while writing rows with spark session

Cloudera Employee

Hi,

Are you still getting the same error even after increasing the overhead memory? Could you please share the error messages after increasing the overhead, executor, and driver memory?

 

Thanks

AK