Support Questions
Find answers, ask questions, and share your expertise

Increasing the memory overhead on the cluster so YARN can run Spark applications



I need to increase the YARN memory overhead for my Spark application to avoid memory exceptions, but when I run it on the cluster, YARN can't run it. Can I increase it on the Hortonworks cluster in the YARN configurations, and if so, where?


Cloudera Employee

How are you increasing the YARN memory overhead? Are you specifying the spark.yarn.executor.memoryOverhead property in your spark-submit command?
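For reference, this is the usual way to pass the property on the command line (the application file and the 1024 MiB values below are placeholders; size the overhead to your workload):

```shell
# Sketch of a spark-submit invocation that raises the YARN memory overhead.
# spark.yarn.executor.memoryOverhead is specified in MiB (here: 1024, illustrative).
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.yarn.executor.memoryOverhead=1024 \
  --conf spark.yarn.driver.memoryOverhead=1024 \
  my_app.py
```

Properties passed with `--conf` override the cluster-wide defaults for that single submission, which is handy for testing a value before changing it globally.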


Yes, I am doing the same.


Which version of HDP (YARN) are you using?

When I used HDP 2.4, I could see how to set spark.yarn.executor.memoryOverhead.

In HDP 2.6, I can't find the place to set it in the Ambari GUI.

How did you set it?
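In case it helps: Ambari typically has no dedicated field for this property, but it can be added as a custom property under the Spark service (Configs > Custom spark-defaults, or Custom spark2-defaults for Spark 2), which writes it into spark-defaults.conf on the cluster nodes. A sketch of the resulting entries (1024 MiB is an illustrative value, not a recommendation):

```
# Entries added to spark-defaults.conf via Ambari's "Custom spark-defaults" section
spark.yarn.executor.memoryOverhead  1024
spark.yarn.driver.memoryOverhead    1024
```

After saving the configuration, Ambari will prompt to restart the affected Spark components so new applications pick up the defaults.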
