<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Spark jobs are stuck under YARN Fair Scheduler in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Spark-jobs-are-stuck-under-YARN-Fair-Scheduler/m-p/287851#M213278</link>
    <description>&lt;P&gt;Hi Sudhindra,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thank you for the update.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Can you share the SparkConf you use for your applications?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The following settings should work for small-resource apps (note that dynamic allocation is disabled):&lt;/P&gt;&lt;PRE&gt;&lt;SPAN class="pln"&gt;conf &lt;/SPAN&gt;&lt;SPAN class="pun"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pun"&gt;(&lt;/SPAN&gt;&lt;SPAN class="typ"&gt;SparkConf&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;().&lt;/SPAN&gt;&lt;SPAN class="pln"&gt;setAppName&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;(&lt;/SPAN&gt;&lt;SPAN class="str"&gt;"simple"&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;)&lt;/SPAN&gt;
        &lt;SPAN class="pun"&gt;.&lt;/SPAN&gt;&lt;SPAN class="pln"&gt;set&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;(&lt;/SPAN&gt;&lt;SPAN class="str"&gt;"spark.shuffle.service.enabled"&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;,&lt;/SPAN&gt; &lt;SPAN class="str"&gt;"false"&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;)&lt;/SPAN&gt;
        &lt;SPAN class="pun"&gt;.&lt;/SPAN&gt;&lt;SPAN class="pln"&gt;set&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;(&lt;/SPAN&gt;&lt;SPAN class="str"&gt;"spark.dynamicAllocation.enabled"&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;,&lt;/SPAN&gt; &lt;SPAN class="str"&gt;"false"&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;)&lt;/SPAN&gt;
        &lt;SPAN class="pun"&gt;.&lt;/SPAN&gt;&lt;SPAN class="pln"&gt;set&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;(&lt;/SPAN&gt;&lt;SPAN class="str"&gt;"spark.cores.max"&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;,&lt;/SPAN&gt; &lt;SPAN class="str"&gt;"1"&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;)&lt;/SPAN&gt;
        &lt;SPAN class="pun"&gt;.&lt;/SPAN&gt;&lt;SPAN class="pln"&gt;set&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;(&lt;/SPAN&gt;&lt;SPAN class="str"&gt;"spark.executor.instances"&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;,&lt;/SPAN&gt;&lt;SPAN class="str"&gt;"2"&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;)&lt;/SPAN&gt;
        &lt;SPAN class="pun"&gt;.&lt;/SPAN&gt;&lt;SPAN class="pln"&gt;set&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;(&lt;/SPAN&gt;&lt;SPAN class="str"&gt;"spark.executor.memory"&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;,&lt;/SPAN&gt;&lt;SPAN class="str"&gt;"200m"&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;)&lt;/SPAN&gt;
        &lt;SPAN class="pun"&gt;.&lt;/SPAN&gt;&lt;SPAN class="pln"&gt;set&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;(&lt;/SPAN&gt;&lt;SPAN class="str"&gt;"spark.executor.cores"&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;,&lt;/SPAN&gt;&lt;SPAN class="str"&gt;"1"&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;)&lt;/SPAN&gt;&lt;/PRE&gt;&lt;P&gt;From:&lt;/P&gt;&lt;P&gt;&lt;A href="https://stackoverflow.com/questions/44581585/warn-cluster-yarnscheduler-initial-job-has-not-accepted-any-resources" target="_blank" rel="noopener"&gt;https://stackoverflow.com/questions/44581585/warn-cluster-yarnscheduler-initial-job-has-not-accepted-any-resources&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;PS: Share the number of cores available on your nodes,&amp;nbsp;&lt;SPAN class="str"&gt;spark.executor.cores should not be higher than number of cores available on each node. Also, are you running spark in cluster or client mode?&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;HTH&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Best,&lt;BR /&gt;Lyubomir&lt;/P&gt;</description>
    <pubDate>Fri, 17 Jan 2020 10:05:13 GMT</pubDate>
    <dc:creator>lyubomirangelo</dc:creator>
    <dc:date>2020-01-17T10:05:13Z</dc:date>
    <item>
      <title>Spark jobs are stuck under YARN Fair Scheduler</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Spark-jobs-are-stuck-under-YARN-Fair-Scheduler/m-p/286645#M212556</link>
      <description>&lt;P&gt;Hi,&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I have set up the YARN Fair Scheduler in Ambari (HDP 3.1.0.0-78) for the "default" queue itself. So far, I haven't added any new queues.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
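&lt;P&gt;For context, the scheduler itself is selected by a single property in yarn-site.xml, which Ambari manages behind the scenes (shown here purely for illustration):&lt;/P&gt;
&lt;PRE&gt;&amp;lt;property&amp;gt;
  &amp;lt;name&amp;gt;yarn.resourcemanager.scheduler.class&amp;lt;/name&amp;gt;
  &amp;lt;value&amp;gt;org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler&amp;lt;/value&amp;gt;
&amp;lt;/property&amp;gt;&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;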
&lt;P&gt;Now, when I submit a simple job against the queue, the application stays in the "ACCEPTED" state forever.&amp;nbsp; I get the message below in the YARN logs:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Additional information is given below.&amp;nbsp; Please help me fix this issue as soon as possible.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="YARN_AM_Message.PNG" style="width: 999px;"&gt;&lt;img src="https://community.cloudera.com/t5/image/serverpage/image-id/25889i0DB04A9C47371CE1/image-size/large?v=v2&amp;amp;px=999" role="button" title="YARN_AM_Message.PNG" alt="YARN_AM_Message.PNG" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;For "default" queue, the below parameters are set through "fair-scheduler.xml".&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="fair_scheduler_screenshot_2.PNG" style="width: 848px;"&gt;&lt;img src="https://community.cloudera.com/t5/image/serverpage/image-id/25890iE37F2BFA07AD0BDB/image-size/large?v=v2&amp;amp;px=999" role="button" title="fair_scheduler_screenshot_2.PNG" alt="fair_scheduler_screenshot_2.PNG" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;Also, no jobs are currently running, apart from the one that I have launched.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="yarn_job_status.PNG" style="width: 999px;"&gt;&lt;img src="https://community.cloudera.com/t5/image/serverpage/image-id/25891i445BD79B11B39F99/image-size/large?v=v2&amp;amp;px=999" role="button" title="yarn_job_status.PNG" alt="yarn_job_status.PNG" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;Given below is a screenshot confirming that the maximum AM resource percent is greater than 0.1.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Scheduler_AM_Percent.PNG" style="width: 863px;"&gt;&lt;img src="https://community.cloudera.com/t5/image/serverpage/image-id/25892iA72D112F68F9FE5B/image-size/large?v=v2&amp;amp;px=999" role="button" title="Scheduler_AM_Percent.PNG" alt="Scheduler_AM_Percent.PNG" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 31 Dec 2019 09:09:09 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Spark-jobs-are-stuck-under-YARN-Fair-Scheduler/m-p/286645#M212556</guid>
      <dc:creator>ssk26</dc:creator>
      <dc:date>2019-12-31T09:09:09Z</dc:date>
    </item>
    <item>
      <title>Re: Spark jobs are stuck under YARN Fair Scheduler</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Spark-jobs-are-stuck-under-YARN-Fair-Scheduler/m-p/286780#M212649</link>
      <description>&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/72245"&gt;@ssk26&lt;/a&gt;,&lt;BR /&gt;&lt;BR /&gt;Can you please check the values of the settings below?&lt;BR /&gt;&lt;BR /&gt;yarn.app.mapreduce.am.resource.mb&lt;BR /&gt;yarn.app.mapreduce.am.resource.cpu-vcores&lt;BR /&gt;&lt;BR /&gt;Cheers&lt;BR /&gt;Eric</description>
      <pubDate>Thu, 02 Jan 2020 22:01:09 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Spark-jobs-are-stuck-under-YARN-Fair-Scheduler/m-p/286780#M212649</guid>
      <dc:creator>EricL</dc:creator>
      <dc:date>2020-01-02T22:01:09Z</dc:date>
    </item>
    <item>
      <title>Re: Spark jobs are stuck under YARN Fair Scheduler</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Spark-jobs-are-stuck-under-YARN-Fair-Scheduler/m-p/286785#M212654</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/10115"&gt;@EricL&lt;/a&gt;,&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks for your inputs.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The value of yarn.app.mapreduce.am.resource.mb is set to 1024 in the "mapred-site.xml" file.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I was not able to find the value of "&lt;SPAN&gt;yarn.app.mapreduce.am.resource.cpu-vcores" in any of the XML files (i.e., core-site.xml, mapred-site.xml, yarn-site.xml, capacity-scheduler.xml, etc.).&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;lt;property&amp;gt;&lt;BR /&gt;&amp;lt;name&amp;gt;yarn.app.mapreduce.am.resource.mb&amp;lt;/name&amp;gt;&lt;BR /&gt;&amp;lt;value&amp;gt;1024&amp;lt;/value&amp;gt;&lt;BR /&gt;&amp;lt;/property&amp;gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Here is the progress that I have made so far:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;- After setting up the YARN Fair Scheduler, I also configured the Spark program to use a fair scheduling pool (from the default Spark fair scheduler XML template).&amp;nbsp;&lt;/P&gt;&lt;P&gt;- The minimum and maximum allocations (in MB) for the Fair Scheduler in YARN are set to 1024 MB and 3072 MB respectively.&amp;nbsp;&lt;/P&gt;&lt;P&gt;- After running a single Spark job [with both driver and executor memory set to 512 MB], I was able to verify that the job runs. However, it was consuming the entire 3 GB of memory.&amp;nbsp;&lt;/P&gt;&lt;P&gt;- So, the next Spark job does not run at all, as it is waiting for memory.&amp;nbsp;&lt;/P&gt;&lt;P&gt;- But if I revert YARN scheduling to the "Capacity Scheduler", then with the same memory settings both jobs run fine without any issues.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;So, what additional memory-related parameters need to be set for Fair Scheduling so that the jobs run properly?&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Please help me fix this issue.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 03 Jan 2020 05:20:21 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Spark-jobs-are-stuck-under-YARN-Fair-Scheduler/m-p/286785#M212654</guid>
      <dc:creator>ssk26</dc:creator>
      <dc:date>2020-01-03T05:20:21Z</dc:date>
    </item>
    <item>
      <title>Re: Spark jobs are stuck under YARN Fair Scheduler</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Spark-jobs-are-stuck-under-YARN-Fair-Scheduler/m-p/286867#M212716</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/10115"&gt;@EricL&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;This is just a gentle reminder.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Can you please help me fix this issue?&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks and Regards,&lt;/P&gt;&lt;P&gt;Sudhindra&lt;/P&gt;</description>
      <pubDate>Sun, 05 Jan 2020 17:26:21 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Spark-jobs-are-stuck-under-YARN-Fair-Scheduler/m-p/286867#M212716</guid>
      <dc:creator>ssk26</dc:creator>
      <dc:date>2020-01-05T17:26:21Z</dc:date>
    </item>
    <item>
      <title>Re: Spark jobs are stuck under YARN Fair Scheduler</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Spark-jobs-are-stuck-under-YARN-Fair-Scheduler/m-p/287094#M212855</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/10115"&gt;@EricL&lt;/a&gt;&amp;nbsp;,&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I am still facing the same issue when I use the YARN Fair Scheduler to run Spark jobs.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;With the same memory configuration, the Spark jobs run fine when the YARN Capacity Scheduler is used.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Can you please help me fix this issue?&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks and Regards,&lt;/P&gt;&lt;P&gt;Sudhindra&lt;/P&gt;</description>
      <pubDate>Wed, 08 Jan 2020 07:05:21 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Spark-jobs-are-stuck-under-YARN-Fair-Scheduler/m-p/287094#M212855</guid>
      <dc:creator>ssk26</dc:creator>
      <dc:date>2020-01-08T07:05:21Z</dc:date>
    </item>
    <item>
      <title>Re: Spark jobs are stuck under YARN Fair Scheduler</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Spark-jobs-are-stuck-under-YARN-Fair-Scheduler/m-p/287141#M212889</link>
      <description>&lt;P&gt;Hello,&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;In your screenshot,&amp;nbsp;&amp;lt;queueMaxResourcesDefault&amp;gt; is set to 8192 MB, 0 vcores.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Your job requires at least 1 vcore, as seen in the Diagnostics section.&lt;/P&gt;&lt;P&gt;Please try increasing the number of vcores in&amp;nbsp;&amp;lt;queueMaxResourcesDefault&amp;gt; and run the job again.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Best,&lt;BR /&gt;Lyubomir&lt;/P&gt;</description>
      <pubDate>Wed, 08 Jan 2020 14:43:57 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Spark-jobs-are-stuck-under-YARN-Fair-Scheduler/m-p/287141#M212889</guid>
      <dc:creator>lyubomirangelo</dc:creator>
      <dc:date>2020-01-08T14:43:57Z</dc:date>
    </item>
    <item>
      <title>Re: Spark jobs are stuck under YARN Fair Scheduler</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Spark-jobs-are-stuck-under-YARN-Fair-Scheduler/m-p/287253#M212936</link>
      <description>&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/72245"&gt;@ssk26&lt;/a&gt;,&lt;BR /&gt;&lt;BR /&gt;Have you tried increasing yarn.app.mapreduce.am.resource.mb? The default value is 1 GB, and I can see from your screenshot that the requested 1 GB exceeds the AM's limit.&lt;BR /&gt;&lt;BR /&gt;Cheers&lt;BR /&gt;Eric</description>
      <pubDate>Thu, 09 Jan 2020 11:00:38 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Spark-jobs-are-stuck-under-YARN-Fair-Scheduler/m-p/287253#M212936</guid>
      <dc:creator>EricL</dc:creator>
      <dc:date>2020-01-09T11:00:38Z</dc:date>
    </item>
    <item>
      <title>Re: Spark jobs are stuck under YARN Fair Scheduler</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Spark-jobs-are-stuck-under-YARN-Fair-Scheduler/m-p/287315#M212982</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/10115"&gt;@EricL&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I changed the parameter "&lt;SPAN&gt;yarn.app.mapreduce.am.resource.mb" to 2 GB (2048 MB).&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Although the second Spark job now runs fine under the "Fair Scheduler" configuration, the tasks under the second Spark job are not getting the required resources at all.&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;&lt;STRONG&gt;[Stage 0:&amp;gt; (0 + 0) / 1]20/01/09 22:58:01 WARN YarnScheduler: &lt;FONT color="#FF6600"&gt;Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources&lt;/FONT&gt;&lt;/STRONG&gt;&lt;BR /&gt;&lt;FONT color="#FF6600"&gt;&lt;STRONG&gt;20/01/09 22:58:16 WARN YarnScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources&lt;/STRONG&gt;&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT color="#FF6600"&gt;&lt;STRONG&gt;20/01/09 22:58:31 WARN YarnScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources&lt;/STRONG&gt;&lt;/FONT&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;FONT color="#000000"&gt;Here is the important information about the cluster:&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT color="#000000"&gt;1. Number of nodes in the cluster: 2&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT color="#000000"&gt;2. 
Total amount of memory in the cluster: 15.28 GB&amp;nbsp;&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT color="#000000"&gt;&amp;nbsp; &amp;nbsp; (&lt;SPAN&gt;yarn.nodemanager.resource.memory-mb = 7821 MB&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT color="#000000"&gt;&amp;nbsp; &amp;nbsp; &lt;SPAN&gt;yarn.app.mapreduce.am.resource.mb = 2048 MB&amp;nbsp;&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT color="#000000"&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; yarn.scheduler.minimum-allocation-mb = 1024 MB&amp;nbsp;&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT color="#000000"&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; yarn.scheduler.maximum-allocation-mb = 3072 MB)&amp;nbsp;&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT color="#000000"&gt;3. Number of executors set through the program: 5 (spark.num.executors)&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT color="#000000"&gt;4. Number of cores set through the program: 3&amp;nbsp; (spark.executor.cores)&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT color="#000000"&gt;5. Spark driver memory and Spark executor memory:&amp;nbsp; 2g each&amp;nbsp;&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;FONT color="#000000"&gt;Please help me understand what else is going wrong.&amp;nbsp;&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;FONT color="#000000"&gt;&lt;STRONG&gt;Note:&amp;nbsp;&lt;/STRONG&gt; With the same set of parameters (along with&amp;nbsp;&lt;SPAN&gt;yarn.app.mapreduce.am.resource.mb of 1024 MB), the Spark jobs run fine when the YARN Capacity Scheduler is set. However, they don't run when the YARN Fair Scheduler is set. So, I want to understand what is going wrong only with the Fair Scheduler.&amp;nbsp;&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 10 Jan 2020 04:44:58 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Spark-jobs-are-stuck-under-YARN-Fair-Scheduler/m-p/287315#M212982</guid>
      <dc:creator>ssk26</dc:creator>
      <dc:date>2020-01-10T04:44:58Z</dc:date>
    </item>
    <item>
      <title>Re: Spark jobs are stuck under YARN Fair Scheduler</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Spark-jobs-are-stuck-under-YARN-Fair-Scheduler/m-p/287316#M212983</link>
      <description>Have you tried &lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/40636"&gt;@lyubomirangelo&lt;/a&gt;'s suggestion? He noticed that you have 0 vcores configured; I am not sure whether that has any negative impact.&lt;BR /&gt;&lt;BR /&gt;Also, you only have 2 nodes in the cluster, meaning only one NodeManager? One master and one worker node?</description>
      <pubDate>Fri, 10 Jan 2020 04:56:04 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Spark-jobs-are-stuck-under-YARN-Fair-Scheduler/m-p/287316#M212983</guid>
      <dc:creator>EricL</dc:creator>
      <dc:date>2020-01-10T04:56:04Z</dc:date>
    </item>
    <item>
      <title>Re: Spark jobs are stuck under YARN Fair Scheduler</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Spark-jobs-are-stuck-under-YARN-Fair-Scheduler/m-p/287349#M213001</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/72245"&gt;@ssk26&lt;/a&gt;&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;From my perspective, you are limiting your default queue to a minimum of 1024 MB, 0 vCores and a maximum of 8192 MB, 0 vCores. In both cases no cores are set: when you try to run a job, it requests 1024 MB of memory and 1 vCore; it then fails to allocate the 1 vCore due to the 0 vCore min/max restriction, and YARN reports 'exceeds maximum AM resources allowed'.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;That's why I think the issue is with core allocation and not with memory.&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;HTH&lt;BR /&gt;&lt;BR /&gt;Best,&lt;BR /&gt;Lyubomir&lt;/P&gt;</description>
      <pubDate>Fri, 10 Jan 2020 11:54:13 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Spark-jobs-are-stuck-under-YARN-Fair-Scheduler/m-p/287349#M213001</guid>
      <dc:creator>lyubomirangelo</dc:creator>
      <dc:date>2020-01-10T11:54:13Z</dc:date>
    </item>
    <item>
      <title>Re: Spark jobs are stuck under YARN Fair Scheduler</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Spark-jobs-are-stuck-under-YARN-Fair-Scheduler/m-p/287838#M213266</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/40636"&gt;@lyubomirangelo&lt;/a&gt;&amp;nbsp;and&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/10115"&gt;@EricL&lt;/a&gt;&amp;nbsp;,&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Sorry for the delayed response.&amp;nbsp; Thanks for your inputs.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have already changed the number of vcores, but I am still facing the same issue.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;In the meantime, I was able to execute the jobs with the YARN Capacity Scheduler (with the same memory configuration). So, I am not sure what's wrong with the settings of the YARN Fair Scheduler.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Please suggest if any specific settings are required for the YARN Fair Scheduler.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Also, I am still using the default queue.&amp;nbsp; I haven't set up a separate queue for the Fair Scheduler.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks and Regards,&amp;nbsp;&lt;/P&gt;&lt;P&gt;Sudhindra&lt;/P&gt;</description>
      <pubDate>Fri, 17 Jan 2020 06:26:58 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Spark-jobs-are-stuck-under-YARN-Fair-Scheduler/m-p/287838#M213266</guid>
      <dc:creator>ssk26</dc:creator>
      <dc:date>2020-01-17T06:26:58Z</dc:date>
    </item>
    <item>
      <title>Re: Spark jobs are stuck under YARN Fair Scheduler</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Spark-jobs-are-stuck-under-YARN-Fair-Scheduler/m-p/287851#M213278</link>
      <description>&lt;P&gt;Hi Sudhindra,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thank you for the update.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Can you share the SparkConf you use for your applications?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The following settings should work for small-resource apps (note that dynamic allocation is disabled):&lt;/P&gt;&lt;PRE&gt;&lt;SPAN class="pln"&gt;conf &lt;/SPAN&gt;&lt;SPAN class="pun"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pun"&gt;(&lt;/SPAN&gt;&lt;SPAN class="typ"&gt;SparkConf&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;().&lt;/SPAN&gt;&lt;SPAN class="pln"&gt;setAppName&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;(&lt;/SPAN&gt;&lt;SPAN class="str"&gt;"simple"&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;)&lt;/SPAN&gt;
        &lt;SPAN class="pun"&gt;.&lt;/SPAN&gt;&lt;SPAN class="pln"&gt;set&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;(&lt;/SPAN&gt;&lt;SPAN class="str"&gt;"spark.shuffle.service.enabled"&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;,&lt;/SPAN&gt; &lt;SPAN class="str"&gt;"false"&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;)&lt;/SPAN&gt;
        &lt;SPAN class="pun"&gt;.&lt;/SPAN&gt;&lt;SPAN class="pln"&gt;set&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;(&lt;/SPAN&gt;&lt;SPAN class="str"&gt;"spark.dynamicAllocation.enabled"&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;,&lt;/SPAN&gt; &lt;SPAN class="str"&gt;"false"&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;)&lt;/SPAN&gt;
        &lt;SPAN class="pun"&gt;.&lt;/SPAN&gt;&lt;SPAN class="pln"&gt;set&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;(&lt;/SPAN&gt;&lt;SPAN class="str"&gt;"spark.cores.max"&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;,&lt;/SPAN&gt; &lt;SPAN class="str"&gt;"1"&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;)&lt;/SPAN&gt;
        &lt;SPAN class="pun"&gt;.&lt;/SPAN&gt;&lt;SPAN class="pln"&gt;set&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;(&lt;/SPAN&gt;&lt;SPAN class="str"&gt;"spark.executor.instances"&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;,&lt;/SPAN&gt;&lt;SPAN class="str"&gt;"2"&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;)&lt;/SPAN&gt;
        &lt;SPAN class="pun"&gt;.&lt;/SPAN&gt;&lt;SPAN class="pln"&gt;set&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;(&lt;/SPAN&gt;&lt;SPAN class="str"&gt;"spark.executor.memory"&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;,&lt;/SPAN&gt;&lt;SPAN class="str"&gt;"200m"&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;)&lt;/SPAN&gt;
        &lt;SPAN class="pun"&gt;.&lt;/SPAN&gt;&lt;SPAN class="pln"&gt;set&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;(&lt;/SPAN&gt;&lt;SPAN class="str"&gt;"spark.executor.cores"&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;,&lt;/SPAN&gt;&lt;SPAN class="str"&gt;"1"&lt;/SPAN&gt;&lt;SPAN class="pun"&gt;)&lt;/SPAN&gt;&lt;/PRE&gt;&lt;P&gt;From:&lt;/P&gt;&lt;P&gt;&lt;A href="https://stackoverflow.com/questions/44581585/warn-cluster-yarnscheduler-initial-job-has-not-accepted-any-resources" target="_blank" rel="noopener"&gt;https://stackoverflow.com/questions/44581585/warn-cluster-yarnscheduler-initial-job-has-not-accepted-any-resources&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;PS: Share the number of cores available on your nodes,&amp;nbsp;&lt;SPAN class="str"&gt;spark.executor.cores should not be higher than number of cores available on each node. Also, are you running spark in cluster or client mode?&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;HTH&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Best,&lt;BR /&gt;Lyubomir&lt;/P&gt;</description>
      <pubDate>Fri, 17 Jan 2020 10:05:13 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Spark-jobs-are-stuck-under-YARN-Fair-Scheduler/m-p/287851#M213278</guid>
      <dc:creator>lyubomirangelo</dc:creator>
      <dc:date>2020-01-17T10:05:13Z</dc:date>
    </item>
    <item>
      <title>Re: Spark jobs are stuck under YARN Fair Scheduler</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Spark-jobs-are-stuck-under-YARN-Fair-Scheduler/m-p/287856#M213283</link>
      <description>&lt;P&gt;Hi Sudhindra,&lt;BR /&gt;&lt;BR /&gt;Please take a look at:&lt;BR /&gt;&lt;A href="https://blog.cloudera.com/yarn-fairscheduler-preemption-deep-dive/" target="_blank"&gt;https://blog.cloudera.com/yarn-fairscheduler-preemption-deep-dive/&lt;/A&gt;&lt;BR /&gt;&lt;A href="https://blog.cloudera.com/untangling-apache-hadoop-yarn-part-3-scheduler-concepts/" target="_blank"&gt;https://blog.cloudera.com/untangling-apache-hadoop-yarn-part-3-scheduler-concepts/&lt;/A&gt;&lt;BR /&gt;&lt;A href="https://clouderatemp.wpengine.com/blog/2016/06/untangling-apache-hadoop-yarn-part-4-fair-scheduler-queue-basics/" target="_blank"&gt;https://clouderatemp.wpengine.com/blog/2016/06/untangling-apache-hadoop-yarn-part-4-fair-scheduler-queue-basics/&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Which type of FairShare are you looking at:&lt;/P&gt;&lt;P&gt;Steady FairShare&lt;BR /&gt;Instantaneous FairShare&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;What is the weight of the default queue you are submitting your apps to?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Best,&lt;BR /&gt;Lyubomir&lt;/P&gt;</description>
      <pubDate>Fri, 17 Jan 2020 12:36:17 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Spark-jobs-are-stuck-under-YARN-Fair-Scheduler/m-p/287856#M213283</guid>
      <dc:creator>lyubomirangelo</dc:creator>
      <dc:date>2020-01-17T12:36:17Z</dc:date>
    </item>
  </channel>
</rss>

