06-07-2016 11:06 AM
I am using Cloudera 5.7.0 and running a Spark Streaming application that consumes from Kafka and does some OpenCV processing.
Some of my containers are being killed by YARN with the reason below:
ExecutorLostFailure (executor 1 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 3.1 GB of 3 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead
I am using the configuration below:
spark-submit --num-executors 20 --executor-memory 2g --executor-cores 2 --conf spark.yarn.executor.memoryOverhead=1000
How can I solve this issue?
06-07-2016 12:31 PM
This means the executor process took more physical memory than YARN thought it should. Usually the fix is to allocate more overhead, so that more memory is requested from YARN for the same JVM heap size. See the spark.yarn.executor.memoryOverhead option, which defaults to 10% of the specified executor memory (with a 384 MB floor). Increase it.
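For example, taking the command from the original post and raising the overhead to 2048 MB (an illustrative value; tune it to your workload):
spark-submit --num-executors 20 --executor-memory 2g --executor-cores 2 --conf spark.yarn.executor.memoryOverhead=2048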
11-22-2017 05:01 AM - edited 11-22-2017 05:03 AM
I am having the same issue.
Cloudera Express 5.7.1
ExecutorLostFailure (executor 60 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 1.5 GB of 1.5 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.
I see your solution, but I cannot find that setting in CM.
Can you please point me to where that option is in the Cloudera Manager UI?
11-22-2017 05:42 AM
This has nothing to do with CM; it is your application's memory configuration. The relevant settings are named right in the error message.
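If you want a cluster-wide default rather than a per-job flag, one line in spark-defaults.conf does it (1024 MB here is an illustrative value; size it to your workload):
spark.yarn.executor.memoryOverhead 1024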
11-22-2017 05:51 AM
So how can I increase the overhead in Jupyter Notebook?
I am not using spark-submit for this job.
And how can I find out what the current overhead settings are?
11-22-2017 05:56 AM
I'm not sure how you would do that in Jupyter. We support spark-submit and the Workbench, not Jupyter. spark-submit is configured on the command line, and the Workbench is configured through spark-defaults.conf. You can see a Spark job's effective configuration in its UI, on the Environment tab.
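That said, if you can create the SparkContext yourself in the notebook, a minimal PySpark sketch would look like the following (it assumes no SparkContext is running yet, since the property must be set before the context starts; the app name and the 1024 MB value are illustrative):

from pyspark import SparkConf, SparkContext

# The overhead must be set before the context is created; it cannot be
# changed on a running context.
conf = (SparkConf()
        .setAppName("overhead-example")  # hypothetical app name
        .set("spark.yarn.executor.memoryOverhead", "1024"))  # MB; assumed value
sc = SparkContext(conf=conf)

# Print the effective configuration, including anything inherited from
# spark-defaults.conf. Properties left at their built-in defaults may not
# appear in this list.
for key, value in sc.getConf().getAll():
    print(key, value)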