Re: Spark jobs are stuck under YARN Fair Scheduler (Support Questions)
https://community.cloudera.com/t5/Support-Questions/Spark-jobs-are-stuck-under-YARN-Fair-Scheduler/m-p/287349#M213001

Hi @ssk26 (https://community.cloudera.com/t5/user/viewprofilepage/user-id/72245),

From my perspective, you are limiting your default queue to a minimum of 1024 MB with 0 vCores and a maximum of 8196 MB with 0 vCores. In both cases no cores are set. When you try to run a job, it needs 1024 MB of memory and 1 vCore; it then fails to allocate that 1 vCore because of the 0-vCore min/max restriction, and the scheduler reports 'exceeds maximum AM resources allowed'.

That's why I think the issue is with the core allocation and not with the memory.
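
To make that concrete, here is a minimal fair-scheduler.xml sketch, assuming your default queue is defined in the allocation file. The 1024 MB / 8196 MB figures are the ones from your configuration; the vCore counts (1 and 8) are illustrative values that you would size to your cluster:

    <?xml version="1.0"?>
    <allocations>
      <queue name="default">
        <!-- Broken: with 0 vCores in both limits, the maximum AM vCore share
             works out to 0, so even the 1-vCore AM container can never start. -->
        <!-- <minResources>1024 mb,0 vcores</minResources> -->
        <!-- <maxResources>8196 mb,0 vcores</maxResources> -->

        <!-- Fixed: give the queue at least 1 vCore so the AM can be allocated.
             The counts 1 and 8 here are illustrative assumptions. -->
        <minResources>1024 mb,1 vcores</minResources>
        <maxResources>8196 mb,8 vcores</maxResources>
      </queue>
    </allocations>

The Fair Scheduler reloads the allocation file periodically, so a change like this should take effect without restarting the ResourceManager.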

HTH
Best,
Lyubomir

Posted by lyubomirangelo on Fri, 10 Jan 2020 11:54:13 GMT