<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Executing Spark-submit with yarn-cluster mode and got OOM in driver with HiveContext in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Executing-Spark-submit-with-yarn-cluster-mode-and-got-OOM-in/m-p/166903#M129240</link>
    <description>&lt;P&gt;Spark allocates memory based on configuration parameters, which can be passed in three ways:&lt;/P&gt;&lt;P&gt;1) on the command line (as you do)&lt;/P&gt;&lt;P&gt;2) programmatically&lt;/P&gt;&lt;P&gt;3) in the "spark-defaults.conf" file in the "conf" directory under your $SPARK_HOME&lt;/P&gt;&lt;P&gt;Also note that there are separate config params for the driver and the executors. This matters because the main difference between "yarn-client" and "yarn-cluster" mode is where the driver lives: on the client, or on the cluster inside the ApplicationMaster. Therefore we should look at your driver config parameters.&lt;/P&gt;&lt;P&gt;These appear to be the driver-related options from your command line:&lt;/P&gt;&lt;PRE&gt;--driver-memory 5000m 
--driver-cores 2 
--conf spark.yarn.driver.memoryOverhead=1024 
--conf spark.driver.maxResultSize=5g 
--driver-java-options "-XX:MaxPermSize=1000m"&lt;/PRE&gt;&lt;P&gt;It is possible that the ApplicationMaster is running on a node that does not have enough memory to satisfy these requests, i.e. the sum of driver memory (5G), PermGen (1G), and overhead (1G) does not fit on the node. I would try lowering --driver-memory in 1G steps until you no longer get the OOM error.&lt;/P&gt;</description>
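    <description>&lt;P&gt;(Editor's note) Following the advice above, a trimmed-down invocation might look like the sketch below. The values and the application jar name are illustrative, not taken from the original post; adjust them for your cluster. If 4000m still OOMs, keep stepping down.&lt;/P&gt;&lt;PRE&gt;# Illustrative spark-submit with driver memory lowered from 5000m to 4000m,
# keeping the overhead and PermGen settings from the original command.
# "your_app.jar" is a placeholder for your actual application jar.
spark-submit \
  --master yarn-cluster \
  --driver-memory 4000m \
  --driver-cores 2 \
  --conf spark.yarn.driver.memoryOverhead=1024 \
  --conf spark.driver.maxResultSize=4g \
  --driver-java-options "-XX:MaxPermSize=1000m" \
  your_app.jar&lt;/PRE&gt;&lt;P&gt;Note that spark.driver.maxResultSize is also lowered here so it does not exceed the driver heap.&lt;/P&gt;</description>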
    <pubDate>Thu, 14 Apr 2016 02:07:18 GMT</pubDate>
    <dc:creator>phargis</dc:creator>
    <dc:date>2016-04-14T02:07:18Z</dc:date>
  </channel>
</rss>

