<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: spark.yarn.executor.memoryOverhead in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/spark-yarn-executor-memoryOverhead/m-p/59927#M23944</link>
    <description>These can be set globally; try searching for just "spark memory", since CM doesn't always show the actual setting name.&lt;BR /&gt;&lt;BR /&gt;They can also be set per job: spark-submit --executor-memory&lt;BR /&gt;&lt;BR /&gt;&lt;A href="https://spark.apache.org/docs/1.6.0/submitting-applications.html" target="_blank"&gt;https://spark.apache.org/docs/1.6.0/submitting-applications.html&lt;/A&gt;</description>
    <pubDate>Thu, 14 Sep 2017 18:22:12 GMT</pubDate>
    <dc:creator>mbigelow</dc:creator>
    <dc:date>2017-09-14T18:22:12Z</dc:date>
    <item>
      <title>spark.yarn.executor.memoryOverhead</title>
      <link>https://community.cloudera.com/t5/Support-Questions/spark-yarn-executor-memoryOverhead/m-p/59909#M23943</link>
      <description>&lt;P&gt;&lt;FONT face="lucida sans unicode,lucida sans" size="2"&gt;&lt;SPAN&gt;I got the error below:&lt;BR /&gt;&lt;BR /&gt;&lt;EM&gt;17/09/12 20:41:36 WARN yarn.YarnAllocator: Container killed by YARN for exceeding memory limits. 1.5 GB of 1.5 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.&lt;/EM&gt;&lt;/SPAN&gt;&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="lucida sans unicode,lucida sans" size="2"&gt;&lt;SPAN&gt;&lt;EM&gt;17/09/12 20:41:39 ERROR cluster.YarnClusterScheduler: Lost executor 1 on xyz.com: remote Akka client disassociated&lt;/EM&gt;&lt;BR /&gt;&lt;BR /&gt;Please help, as I am not able to find &lt;STRONG&gt;spark.executor.memory&lt;/STRONG&gt; or &lt;STRONG&gt;spark.yarn.executor.memoryOverhead&lt;/STRONG&gt; in Cloudera Manager&amp;nbsp;(Cloudera Enterprise 5.4.7).&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 16 Sep 2022 12:14:38 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/spark-yarn-executor-memoryOverhead/m-p/59909#M23943</guid>
      <dc:creator>joyabrata</dc:creator>
      <dc:date>2022-09-16T12:14:38Z</dc:date>
    </item>
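The 1.5 GB limit in the warning above is not the executor heap alone: it is spark.executor.memory plus the YARN memory overhead. A minimal sketch of the arithmetic, assuming the Spark 1.x default overhead formula (the larger of 384 MiB and 10% of executor memory); the function name is illustrative:

```python
# Sketch: the physical memory limit YARN enforces for one Spark executor
# container. Assumption: Spark 1.x on YARN, where
# spark.yarn.executor.memoryOverhead defaults to
# max(384 MiB, 10% of spark.executor.memory).

def container_limit_mib(executor_memory_mib, overhead_mib=None):
    """Container memory limit in MiB: executor heap plus overhead."""
    if overhead_mib is None:
        # Default applied when spark.yarn.executor.memoryOverhead is unset
        overhead_mib = max(384, int(executor_memory_mib * 0.10))
    return executor_memory_mib + overhead_mib

# A 1 GiB executor gets the 384 MiB floor, so the container is 1408 MiB;
# YARN then rounds up to its allocation increment, which can appear as ~1.5 GB.
print(container_limit_mib(1024))  # 1408
```

Raising either the executor memory or the overhead raises this limit, which is what the warning's "Consider boosting spark.yarn.executor.memoryOverhead" suggests.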
    <item>
      <title>Re: spark.yarn.executor.memoryOverhead</title>
      <link>https://community.cloudera.com/t5/Support-Questions/spark-yarn-executor-memoryOverhead/m-p/59927#M23944</link>
      <description>These can be set globally; try searching for just "spark memory", since CM doesn't always show the actual setting name.&lt;BR /&gt;&lt;BR /&gt;They can also be set per job: spark-submit --executor-memory&lt;BR /&gt;&lt;BR /&gt;&lt;A href="https://spark.apache.org/docs/1.6.0/submitting-applications.html" target="_blank"&gt;https://spark.apache.org/docs/1.6.0/submitting-applications.html&lt;/A&gt;</description>
      <pubDate>Thu, 14 Sep 2017 18:22:12 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/spark-yarn-executor-memoryOverhead/m-p/59927#M23944</guid>
      <dc:creator>mbigelow</dc:creator>
      <dc:date>2017-09-14T18:22:12Z</dc:date>
    </item>
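The per-job route mentioned in the reply above can be sketched as a spark-submit argument list. The application name (app.py) and all values are illustrative placeholders, not recommendations:

```python
# Sketch: per-job memory settings as spark-submit arguments.
# app.py, "2g", and 512 are illustrative placeholders.

def submit_args(executor_memory="2g", overhead_mib=512):
    return [
        "spark-submit",
        "--master", "yarn",
        "--executor-memory", executor_memory,  # sets spark.executor.memory
        # memoryOverhead has no dedicated flag; pass it via --conf
        # (the value is interpreted as MiB on Spark 1.x)
        "--conf", "spark.yarn.executor.memoryOverhead=%d" % overhead_mib,
        "app.py",
    ]

print(" ".join(submit_args()))
```

Settings passed this way apply only to that one submission, overriding whatever the cluster-wide defaults are.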
    <item>
      <title>Re: spark.yarn.executor.memoryOverhead</title>
      <link>https://community.cloudera.com/t5/Support-Questions/spark-yarn-executor-memoryOverhead/m-p/59941#M23945</link>
      <description>&lt;P&gt;&lt;SPAN&gt;spark.executor.memory&amp;nbsp;can be found in Cloudera Manager under Hive&amp;nbsp;-&amp;gt;&amp;nbsp;Configuration; search for "Java Heap".&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Spark Executor Maximum Java Heap Size&lt;BR /&gt;spark.executor.memory&lt;BR /&gt;HiveServer2 Default Group&lt;/P&gt;&lt;P&gt;256 MiB&lt;BR /&gt;&lt;BR /&gt;Spark Driver Maximum Java Heap Size&lt;BR /&gt;spark.driver.memory&lt;BR /&gt;HiveServer2 Default Group&lt;/P&gt;&lt;P&gt;256 MiB&lt;/P&gt;</description>
      <pubDate>Fri, 15 Sep 2017 01:43:57 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/spark-yarn-executor-memoryOverhead/m-p/59941#M23945</guid>
      <dc:creator>ebeb</dc:creator>
      <dc:date>2017-09-15T01:43:57Z</dc:date>
    </item>
    <item>
      <title>Re: spark.yarn.executor.memoryOverhead</title>
      <link>https://community.cloudera.com/t5/Support-Questions/spark-yarn-executor-memoryOverhead/m-p/59966#M23946</link>
      <description>&lt;P&gt;Thank you.&lt;BR /&gt;One additional query: do you know why these Spark configs are placed under Hive?&lt;/P&gt;</description>
      <pubDate>Fri, 15 Sep 2017 11:37:05 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/spark-yarn-executor-memoryOverhead/m-p/59966#M23946</guid>
      <dc:creator>joyabrata</dc:creator>
      <dc:date>2017-09-15T11:37:05Z</dc:date>
    </item>
    <item>
      <title>Re: spark.yarn.executor.memoryOverhead</title>
      <link>https://community.cloudera.com/t5/Support-Questions/spark-yarn-executor-memoryOverhead/m-p/60002#M23947</link>
      <description>&lt;P&gt;It's a Spark-side configuration, so you can always specify it via the "--conf" option with spark-submit, or you can set the property globally in CM via "&lt;SPAN&gt;Spark Client Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-defaults.conf", so that CM includes the setting for you via the Spark gateway client configuration.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Sun, 17 Sep 2017 05:27:35 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/spark-yarn-executor-memoryOverhead/m-p/60002#M23947</guid>
      <dc:creator>Yuexin Zhang</dc:creator>
      <dc:date>2017-09-17T05:27:35Z</dc:date>
    </item>
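The safety-valve route described in the reply above amounts to appending lines like the following to spark-defaults.conf, which CM then distributes to the gateway hosts. The property names are from this thread; the values are illustrative:

```
# spark-conf/spark-defaults.conf (e.g. via the CM Gateway safety valve)
# Values below are examples, not recommendations.
spark.executor.memory                 2g
spark.yarn.executor.memoryOverhead    512
```

Entries here become cluster-wide defaults; an explicit --conf on spark-submit still wins for an individual job.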
    <item>
      <title>Re: spark.yarn.executor.memoryOverhead</title>
      <link>https://community.cloudera.com/t5/Support-Questions/spark-yarn-executor-memoryOverhead/m-p/286860#M212709</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;These parameters, spark.executor.memory and&amp;nbsp;spark.yarn.executor.memoryOverhead, can be set in the spark-submit command or in the Advanced Configuration snippets.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;&lt;P&gt;AKR&lt;/P&gt;</description>
      <pubDate>Sun, 05 Jan 2020 14:33:57 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/spark-yarn-executor-memoryOverhead/m-p/286860#M212709</guid>
      <dc:creator>AKR</dc:creator>
      <dc:date>2020-01-05T14:33:57Z</dc:date>
    </item>
  </channel>
</rss>

