<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: YARN Memory in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/YARN-Memory/m-p/95825#M59181</link>
    <description>Support Questions thread on YARN memory allocation, container expiry, and per-node container sizing.</description>
    <pubDate>Fri, 23 Oct 2015 21:42:32 GMT</pubDate>
    <dc:creator>nsabharwal</dc:creator>
    <dc:date>2015-10-23T21:42:32Z</dc:date>
    <item>
      <title>YARN Memory</title>
      <link>https://community.cloudera.com/t5/Support-Questions/YARN-Memory/m-p/95823#M59179</link>
      <description>&lt;P&gt;When a certain amount of memory is allocated to the ResourceManager (the memory available for all YARN containers on a node), is it reserved immediately, or is it used gradually on an as-needed basis until that capacity is reached?&lt;/P&gt;</description>
      <pubDate>Fri, 23 Oct 2015 02:33:25 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/YARN-Memory/m-p/95823#M59179</guid>
      <dc:creator>bsaini</dc:creator>
      <dc:date>2015-10-23T02:33:25Z</dc:date>
    </item>
    <item>
      <title>Re: YARN Memory</title>
      <link>https://community.cloudera.com/t5/Support-Questions/YARN-Memory/m-p/95824#M59180</link>
      <description>&lt;P&gt;@bsaini@hortonworks.com&lt;/P&gt;&lt;P&gt;This may help: &lt;A target="_blank" href="http://hortonworks.com/blog/apache-hadoop-yarn-resourcemanager/"&gt;link&lt;/A&gt;&lt;/P&gt;&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;ContainerAllocationExpirer&lt;/STRONG&gt;: This component ensures that all allocated containers are actually used by AMs and subsequently launched on the corresponding NMs. AMs run untrusted user code and can potentially hold on to allocations without using them, causing cluster under-utilization. To address this, the ContainerAllocationExpirer maintains the list of allocated containers that have not yet been used on the corresponding NMs. If the corresponding NM does not report to the RM that a container has started running within a configured interval of time (10 minutes by default), the container is deemed dead and is expired by the RM.&lt;/LI&gt;&lt;/UL&gt;</description>
      <pubDate>Fri, 23 Oct 2015 02:45:38 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/YARN-Memory/m-p/95824#M59180</guid>
      <dc:creator>nsabharwal</dc:creator>
      <dc:date>2015-10-23T02:45:38Z</dc:date>
    </item>
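    <!--
    The 10-minute interval mentioned above is configurable. A minimal
    yarn-site.xml sketch, assuming the Hadoop property name
    yarn.resourcemanager.rm.container-allocation.expiry-interval-ms (value
    in milliseconds; 600000 ms corresponds to the 10-minute default; verify
    the property name against your Hadoop version's yarn-default.xml):

    ```xml
    <property>
      <name>yarn.resourcemanager.rm.container-allocation.expiry-interval-ms</name>
      <value>600000</value>
    </property>
    ```
    -->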
    <item>
      <title>Re: YARN Memory</title>
      <link>https://community.cloudera.com/t5/Support-Questions/YARN-Memory/m-p/95825#M59181</link>
      <description>&lt;P&gt;@&lt;A href="http://community.hortonworks.com/users/191/bsaini.html"&gt;bsaini@hortonworks.com&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Continuing the above explanation of container expiry, there is a very good explanation in this &lt;A href="http://hortonworks.com/blog/how-to-plan-and-configure-yarn-in-hdp-2-0/"&gt;blog&lt;/A&gt;:&lt;/P&gt;&lt;PRE&gt;With YARN and MapReduce 2, there are no longer pre-configured static slots for Map and Reduce tasks. The entire cluster is available for dynamic resource allocation of Maps and Reduces as needed by the job. In our example cluster, with the above configurations, YARN will be able to allocate on each node up to 10 mappers (40/4) or 5 reducers (40/8) or a permutation within that.&lt;/PRE&gt;</description>
      <pubDate>Fri, 23 Oct 2015 21:42:32 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/YARN-Memory/m-p/95825#M59181</guid>
      <dc:creator>nsabharwal</dc:creator>
      <dc:date>2015-10-23T21:42:32Z</dc:date>
    </item>
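    <!--
    A minimal sketch of the per-node container arithmetic quoted above.
    The 40 GB node capacity and the 4 GB map / 8 GB reduce container sizes
    are the blog's example values, not defaults:

    ```python
    def max_containers(node_mem_gb: int, container_mem_gb: int) -> int:
        """Upper bound on containers of one size that fit on a single node."""
        return node_mem_gb // container_mem_gb

    # Example cluster from the quoted blog: 40 GB of memory per node for YARN.
    print(max_containers(40, 4))  # up to 10 mappers per node
    print(max_containers(40, 8))  # up to 5 reducers per node
    ```
    -->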
    <item>
      <title>Re: YARN Memory</title>
      <link>https://community.cloudera.com/t5/Support-Questions/YARN-Memory/m-p/95826#M59182</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/191/bsaini.html" nodeid="191"&gt;@bsaini&lt;/A&gt; are you still having issues with this? Can you accept the best answer or provide your own solution?&lt;/P&gt;</description>
      <pubDate>Wed, 03 Feb 2016 05:27:07 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/YARN-Memory/m-p/95826#M59182</guid>
      <dc:creator>aervits</dc:creator>
      <dc:date>2016-02-03T05:27:07Z</dc:date>
    </item>
  </channel>
</rss>

