I need to increase the YARN memory overhead for my Spark application to avoid GC memory exceptions, but when I run it on the cluster, YARN can't run it. Can I increase it on the Hortonworks cluster in the YARN configurations, and if so, where?
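For context, this is roughly how I submit the job today, with the overhead setting I'd like to apply added per job (the 1024 MB value, the class name, and the jar name below are just placeholders, not my real job):

```
# 1024 is only an illustrative value (the property is in MB);
# com.example.MyApp and my-app.jar are placeholders for my real job
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.yarn.executor.memoryOverhead=1024 \
  --class com.example.MyApp \
  my-app.jar
```

Passing --conf works per submission, but I'd rather set a cluster-wide default through the configuration, which is why I'm asking where it lives in YARN/Ambari.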
Which version of HDP (YARN) are you using?
When I used HDP 2.4, I could see where to set spark.yarn.executor.memoryOverhead.
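If I remember correctly, back then it was just a one-line property in spark-defaults.conf, something like this (again, 1024 MB is only an example value):

```
# in spark-defaults.conf; value is in MB and 1024 is only an example
spark.yarn.executor.memoryOverhead 1024
```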
In HDP 2.6 I can't find the place to set it in the Ambari GUI.
How did you set it?