<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: How many number of executors will be created for a spark application? in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/How-many-number-of-executors-will-be-created-for-a-spark/m-p/186505#M148607</link>
    <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/16783/saranpons3.html" nodeid="16783"&gt;@Saravanan Selvam&lt;/A&gt;, In YARN mode you can control the total number of executors for an application with the --num-executors option. &lt;/P&gt;&lt;P&gt;However, if you do not explicitly specify --num-executors for a Spark application in YARN mode, it would typically start one executor on each NodeManager. &lt;/P&gt;&lt;P&gt;Spark also has a feature called dynamic resource allocation, which scales the set of cluster resources allocated to your application up and down based on the workload. This way you can make sure the application does not over-utilize cluster resources. &lt;/P&gt;&lt;P&gt;&lt;A href="http://spark.apache.org/docs/1.2.0/job-scheduling.html#dynamic-resource-allocation" target="_blank"&gt;http://spark.apache.org/docs/1.2.0/job-scheduling.html#dynamic-resource-allocation&lt;/A&gt;&lt;/P&gt;</description>
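    <content:encoded>&lt;P&gt;A minimal sketch of the two approaches described above, as spark-submit invocations in YARN mode — the resource values (2 executors, 2g memory, 2 cores) and the application jar name are hypothetical placeholders, not values from the thread:&lt;/P&gt;&lt;PRE&gt;# Fixed allocation: request an explicit number of executors up front
spark-submit --master yarn --deploy-mode cluster \
  --num-executors 2 --executor-memory 2g --executor-cores 2 \
  my-app.jar

# Alternatively, let Spark scale executors with the workload;
# dynamic allocation also requires the external shuffle service
spark-submit --master yarn --deploy-mode cluster \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  my-app.jar&lt;/PRE&gt;</content:encoded>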
    <pubDate>Wed, 05 Apr 2017 01:53:51 GMT</pubDate>
    <dc:creator>yvora</dc:creator>
    <dc:date>2017-04-05T01:53:51Z</dc:date>
  </channel>
</rss>

