<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: completed containers does not freed -  Yarn resources not freed until entire job is completed in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/completed-containers-does-not-freed-Yarn-resources-not-freed/m-p/236723#M198536</link>
    <description>&lt;P&gt;For this you need to enable Spark dynamic allocation.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Dynamic Allocation (of Executors)&lt;/STRONG&gt; (aka &lt;EM&gt;Elastic Scaling&lt;/EM&gt;) is a Spark feature that adds or removes &lt;A href="https://jaceklaskowski.gitbooks.io/mastering-apache-spark/spark-Executor.html"&gt;Spark executors&lt;/A&gt; dynamically to match the workload.&lt;/P&gt;&lt;P&gt;Unlike the "traditional" static allocation, where a Spark application reserves CPU and memory resources upfront (irrespective of how much it eventually uses), with dynamic allocation you get as much as needed and no more. Spark scales the number of executors up and down based on workload: idle executors are removed, freeing their YARN containers, and when there are pending tasks waiting for executors to run on, dynamic allocation requests new ones.&lt;/P&gt;</description>
    <pubDate>Fri, 29 Mar 2019 11:43:22 GMT</pubDate>
    <dc:creator>chennuri_gouris</dc:creator>
    <dc:date>2019-03-29T11:43:22Z</dc:date>
  </channel>
</rss>
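The dynamic allocation described in the answer above can be sketched as a spark-defaults.conf fragment. This is a minimal illustration, not the poster's exact setup: the property names come from Spark's dynamic-allocation configuration, while the numeric values are placeholder assumptions you should tune for your cluster.

```properties
# Enable dynamic allocation so idle executors (and their YARN containers) are released
spark.dynamicAllocation.enabled              true
# The external shuffle service is required so shuffle files survive executor removal
spark.shuffle.service.enabled                true
# Lower/upper bounds on the executor count (placeholder values; tune per cluster)
spark.dynamicAllocation.minExecutors         1
spark.dynamicAllocation.maxExecutors         20
# Remove an executor after it has been idle this long
spark.dynamicAllocation.executorIdleTimeout  60s
```

With these settings, containers held by idle executors are handed back to YARN after the idle timeout instead of being retained until the whole job completes. The same properties can also be passed per job via `--conf` on `spark-submit`.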

