spark.yarn.executor.memoryOverhead

SOLVED


Explorer

I got the error below:

17/09/12 20:41:36 WARN yarn.YarnAllocator: Container killed by YARN for exceeding memory limits. 1.5 GB of 1.5 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.

17/09/12 20:41:39 ERROR cluster.YarnClusterScheduler: Lost executor 1 on xyz.com: remote Akka client disassociated

Please help; I am not able to find spark.executor.memory or spark.yarn.executor.memoryOverhead in Cloudera Manager (Cloudera Enterprise 5.4.7).

ACCEPTED SOLUTION

Re: spark.yarn.executor.memoryOverhead

Expert Contributor

spark.executor.memory can be found in Cloudera Manager under Hive -> Configuration by searching for "Java Heap":

Spark Executor Maximum Java Heap Size (spark.executor.memory), HiveServer2 Default Group: 256 MiB
Spark Driver Maximum Java Heap Size (spark.driver.memory), HiveServer2 Default Group: 256 MiB
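
If it helps, here is a minimal sketch of what the equivalent entries would look like in spark-defaults.conf; the 2g/1g/512 values are illustrative placeholders, not recommendations:

# illustrative placeholder values, not recommendations
spark.executor.memory 2g
spark.driver.memory 1g
# in Spark 1.x this value is a plain number of MiB
spark.yarn.executor.memoryOverhead 512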

REPLIES

Re: spark.yarn.executor.memoryOverhead

Champion
These can be set globally; try searching for just "spark memory", as CM doesn't always include the actual setting name.

They can also be set per job, e.g. with spark-submit --executor-memory.

https://spark.apache.org/docs/1.6.0/submitting-applications.html
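
For example, a per-job invocation could look like the sketch below; the class name, jar, and memory sizes are all placeholders, not recommendations:

# sizes, class, and jar below are placeholders
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --executor-memory 2g \
  --conf spark.yarn.executor.memoryOverhead=512 \
  --class com.example.MyApp \
  myapp.jar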

Re: spark.yarn.executor.memoryOverhead

Explorer

Thank you.
An additional question: do you know why these Spark configs are placed under Hive?

Re: spark.yarn.executor.memoryOverhead

Contributor

It's a Spark-side configuration, so you can always specify it via the "--conf" option with spark-submit. Or you can set the property globally in CM via "Spark Client Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-defaults.conf", and CM will then include the setting for you in the Spark gateway client configuration.
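
For example, the safety-valve snippet could contain lines like these (512 is a placeholder; per the Spark 1.6 docs the overhead defaults to 10% of executor memory, with a 384 MiB minimum):

# appended to spark-conf/spark-defaults.conf by the safety valve
# values are a plain number of MiB in Spark 1.x; 512 is only a placeholder
spark.yarn.executor.memoryOverhead 512
spark.yarn.driver.memoryOverhead 512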