<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Yarn Application failed on out of memory in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Yarn-Application-failed-on-out-of-memory/m-p/50397#M35058</link>
    <description>&lt;P&gt;The map container memory was set to 4 GB. &amp;nbsp;Presumably the heap was set to 3 GB (newer versions auto-set the container heap from a percentage; the default percentage is 80%, and 3/4 is 75%). &amp;nbsp;The 6.3 GB limit comes from virtual memory, whose check I recommend just disabling, as it can cause weird OOM issues. &amp;nbsp;The default virtual memory ratio is 2.1, which doesn't give 6.3 GB from 4 GB; it matches the 3 GB physical limit instead (3 GB × 2.1 = 6.3 GB). &amp;nbsp;The log even states that the latter figure is the virtual memory size.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Set yarn.nodemanager.vmem-check-enabled = false to disable it.&lt;/P&gt;</description>
    <pubDate>Sat, 04 Feb 2017 05:30:43 GMT</pubDate>
    <dc:creator>mbigelow</dc:creator>
    <dc:date>2017-02-04T05:30:43Z</dc:date>
    <item>
      <title>Yarn Application failed on out of memory</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Yarn-Application-failed-on-out-of-memory/m-p/50122#M35052</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have a MapReduce job that failed with an out-of-memory error.&lt;/P&gt;&lt;P&gt;Log:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Application application_1484466365663_87038 failed 2 times due to AM Container for appattempt_1484466365663_87038_000002 exited with exitCode: -104&lt;BR /&gt;Diagnostics: Container [pid=7448,containerID=container_e29_1484466365663_87038_02_000001] is running beyond physical memory limits. Current usage: 3.0 GB of 3 GB physical memory used; 6.6 GB of 6.3 GB virtual memory used. Killing container.&lt;BR /&gt;Dump of the process-tree for container_e29_1484466365663_87038_02_000001 :&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;When I check the memory configured for the map task and for the Application Master in Cloudera Manager, it's 2 GB.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I checked the job configuration in YARN and see it's 2 GB.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;mapreduce.map.memory.mb = 2 GB&lt;/P&gt;&lt;P&gt;I have 2 questions:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;1- How do I know whether this container is the AM container or the mapper container? Does the above error indicate the AM memory was exceeded?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;2- Why is it alerting on 3 GB while all my configuration is 2 GB?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The solution is clear to me: I need to increase the memory.&lt;/P&gt;</description>
      <pubDate>Fri, 16 Sep 2022 10:58:49 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Yarn-Application-failed-on-out-of-memory/m-p/50122#M35052</guid>
      <dc:creator>Fawze</dc:creator>
      <dc:date>2022-09-16T10:58:49Z</dc:date>
    </item>
    <item>
      <title>Re: Yarn Application failed on out of memory</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Yarn-Application-failed-on-out-of-memory/m-p/50128#M35053</link>
      <description>Track down container container_e29_1484466365663_87038_02_000001. It is most likely a reducer. I say that because you said both the Map and AM container sizes were set to 2 GB. Therefore the Reduce container size must be 3 GB. Well, in theory the user launching it could have overridden any of them.&lt;BR /&gt;&lt;BR /&gt;What is the value of mapreduce.reduce.memory.mb?&lt;BR /&gt;&lt;BR /&gt;Let's try another route as well: in the RM UI, in the job in question, does it have any failed maps or reducers? If yes, drill down to the failed one and view the logs. If not, then the AM container OOM'd.&lt;BR /&gt;&lt;BR /&gt;From my recollection though, that is the line the AM logs about one of the containers it is responsible for.&lt;BR /&gt;&lt;BR /&gt;Anyway, the short of it is: either the Reduce container size is 3 GB, or the user set their own value to 3 GB, as the values in the cluster configs are only the defaults.</description>
      <pubDate>Mon, 30 Jan 2017 05:16:31 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Yarn-Application-failed-on-out-of-memory/m-p/50128#M35053</guid>
      <dc:creator>mbigelow</dc:creator>
      <dc:date>2017-01-30T05:16:31Z</dc:date>
    </item>
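    <!--
    The reply above distinguishes three independently sized container types. A minimal mapred-site.xml sketch of the corresponding properties, using values from the thread (2 GB map/AM per the question; the 3 GB reduce value is the hypothesis being tested, not a confirmed setting):

    ```xml
    <configuration>
      <property>
        <name>mapreduce.map.memory.mb</name>
        <value>2048</value>
      </property>
      <property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>3072</value>
      </property>
      <property>
        <name>yarn.app.mapreduce.am.resource.mb</name>
        <value>2048</value>
      </property>
    </configuration>
    ```

    A job submitter can override any of these at launch time (e.g. -Dmapreduce.reduce.memory.mb=3072), which is why the cluster-level values are only defaults.
    -->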
    <item>
      <title>Re: Yarn Application failed on out of memory</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Yarn-Application-failed-on-out-of-memory/m-p/50134#M35054</link>
      <description>The job is a cleaner job running with only 1 mapper, and it's an Oozie&lt;BR /&gt;launcher. Is the default for the Oozie launcher different from the job's?&lt;BR /&gt;&lt;BR /&gt;oozie:launcher:T=java:W=hdfs-cleaner-wf:A=hdfs-cleaner:ID=0568638-160809023957851-oozie-clou-W&lt;BR /&gt;&lt;BR /&gt;More of the log:&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Application application_1484466365663_87038 failed 2 times due to AM&lt;BR /&gt;Container for appattempt_1484466365663_87038_000002 exited with exitCode:&lt;BR /&gt;-104&lt;BR /&gt;For more detailed output, check application tracking page:&lt;BR /&gt;&lt;A href="http://avor-mhc102.lpdomain.com:8088/proxy/application_1484466365663_87038/Then" target="_blank"&gt;http://avor-mhc102.lpdomain.com:8088/proxy/application_1484466365663_87038/Then&lt;/A&gt;,&lt;BR /&gt;click on links to logs of each attempt.&lt;BR /&gt;Diagnostics: Container&lt;BR /&gt;[pid=7448,containerID=container_e29_1484466365663_87038_02_000001] is&lt;BR /&gt;running beyond physical memory limits. Current usage: 3.0 GB of 3 GB&lt;BR /&gt;physical memory used; 6.6 GB of 6.3 GB virtual memory used. 
Killing&lt;BR /&gt;container.&lt;BR /&gt;Dump of the process-tree for container_e29_1484466365663_87038_02_000001 :&lt;BR /&gt;|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)&lt;BR /&gt;SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE&lt;BR /&gt;|- 7448 7446 7448 7448 (bash) 2 2 108650496 304 /bin/bash -c&lt;BR /&gt;/jdk8//bin/java -Dlog4j.configuration=container-log4j.properties&lt;BR /&gt;-Dyarn.app.container.log.dir=//hadoop/log/hadoop-yarn/container/application_1484466365663_87038/container_e29_1484466365663_87038_02_000001&lt;BR /&gt;-Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA&lt;BR /&gt;-Djava.net.preferIPv4Stack=true -Xmx825955249&lt;BR /&gt;-Djava.net.preferIPv4Stack=true -Xmx4096m -Xmx4608m -Djava.io.tmpdir=./tmp&lt;BR /&gt;org.apache.hadoop.mapreduce.v2.app.MRAppMaster&lt;BR /&gt;1&amp;gt;/hadoop/log/hadoop-yarn/container/application_1484466365663_87038/container_e29_1484466365663_87038_02_000001/stdout&lt;BR /&gt;2&amp;gt;/hadoop/log/hadoop-yarn/container/application_1484466365663_87038/container_e29_1484466365663_87038_02_000001/stderr&lt;BR /&gt;|- 7613 7448 7448 7448 (java) 22034 2726 6976090112 788011 /jdk8//bin/java&lt;BR /&gt;-Dlog4j.configuration=container-log4j.properties&lt;BR /&gt;-Dyarn.app.container.log.dir=/hadoop/log/hadoop-yarn/container/application_1484466365663_87038/container_e29_1484466365663_87038_02_000001&lt;BR /&gt;-Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA&lt;BR /&gt;-Djava.net.preferIPv4Stack=true -Xmx825955249&lt;BR /&gt;-Djava.net.preferIPv4Stack=true -Xmx4096m -Xmx4608m -Djava.io.tmpdir=./tmp&lt;BR /&gt;org.apache.hadoop.mapreduce.v2.app.MRAppMaster&lt;BR /&gt;Container killed on request. Exit code is 143&lt;BR /&gt;Container exited with a non-zero exit code 143&lt;BR /&gt;Failing this attempt. 
Failing the application.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Maps Total: 1&lt;BR /&gt;&lt;BR /&gt;-&lt;BR /&gt;- Total Tasks: 1&lt;BR /&gt;-&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 30 Jan 2017 08:00:53 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Yarn-Application-failed-on-out-of-memory/m-p/50134#M35054</guid>
      <dc:creator>Fawze</dc:creator>
      <dc:date>2017-01-30T08:00:53Z</dc:date>
    </item>
    <item>
      <title>Re: Yarn Application failed on out of memory</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Yarn-Application-failed-on-out-of-memory/m-p/50135#M35055</link>
      <description>I'm not terribly familiar with Oozie, but I believe the launcher is configured separately from the actual job.&lt;BR /&gt;&lt;BR /&gt;Also, from the log "-Xmx4096m -Xmx4608m" it is launching with a 4 GB container size and the heap is set to 3 GB.&lt;BR /&gt;&lt;BR /&gt;Is it set in the Oozie job settings?</description>
      <pubDate>Mon, 30 Jan 2017 08:07:13 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Yarn-Application-failed-on-out-of-memory/m-p/50135#M35055</guid>
      <dc:creator>mbigelow</dc:creator>
      <dc:date>2017-01-30T08:07:13Z</dc:date>
    </item>
    <item>
      <title>Re: Yarn Application failed on out of memory</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Yarn-Application-failed-on-out-of-memory/m-p/50137#M35056</link>
      <description>Yes, in the Oozie job it's 4 GB, you are right.&lt;BR /&gt;&lt;BR /&gt;com.hadoop.platform.cleaner.CleanerJob&lt;BR /&gt;-Xmx4096m&lt;BR /&gt;</description>
      <pubDate>Mon, 30 Jan 2017 08:25:53 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Yarn-Application-failed-on-out-of-memory/m-p/50137#M35056</guid>
      <dc:creator>Fawze</dc:creator>
      <dc:date>2017-01-30T08:25:53Z</dc:date>
    </item>
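    <!--
    The -Xmx4096m confirmed above typically lives in the java-opts of an Oozie workflow.xml java action. A hedged sketch (element names from the Oozie workflow schema; the workflow and action names and the main class mirror the thread, the rest is illustrative):

    ```xml
    <workflow-app name="hdfs-cleaner-wf" xmlns="uri:oozie:workflow:0.5">
      <start to="hdfs-cleaner"/>
      <action name="hdfs-cleaner">
        <java>
          <job-tracker>${jobTracker}</job-tracker>
          <name-node>${nameNode}</name-node>
          <main-class>com.hadoop.platform.cleaner.CleanerJob</main-class>
          <java-opts>-Xmx4096m</java-opts>
        </java>
        <ok to="end"/>
        <error to="fail"/>
      </action>
      <kill name="fail"><message>cleaner failed</message></kill>
      <end name="end"/>
    </workflow-app>
    ```

    Lowering this heap value, or raising the launcher container size to match it, keeps the heap below the container's physical memory limit.
    -->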
    <item>
      <title>Re: Yarn Application failed on out of memory</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Yarn-Application-failed-on-out-of-memory/m-p/50396#M35057</link>
      <description>&lt;P&gt;My concern is why it's alerting on 3 GB of memory and not the mapper memory, which is 6 GB, or the Oozie launcher, which is 4 GB. Also, is it alerting on the mapper memory or the Application Master memory?&lt;/P&gt;</description>
      <pubDate>Sat, 04 Feb 2017 05:21:24 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Yarn-Application-failed-on-out-of-memory/m-p/50396#M35057</guid>
      <dc:creator>Fawze</dc:creator>
      <dc:date>2017-02-04T05:21:24Z</dc:date>
    </item>
    <item>
      <title>Re: Yarn Application failed on out of memory</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Yarn-Application-failed-on-out-of-memory/m-p/50397#M35058</link>
      <description>&lt;P&gt;The map container memory was set to 4 GB. &amp;nbsp;Presumably the heap was set to 3 GB (newer versions auto-set the container heap from a percentage; the default percentage is 80%, and 3/4 is 75%). &amp;nbsp;The 6.3 GB limit comes from virtual memory, whose check I recommend just disabling, as it can cause weird OOM issues. &amp;nbsp;The default virtual memory ratio is 2.1, which doesn't give 6.3 GB from 4 GB; it matches the 3 GB physical limit instead (3 GB × 2.1 = 6.3 GB). &amp;nbsp;The log even states that the latter figure is the virtual memory size.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Set yarn.nodemanager.vmem-check-enabled = false to disable it.&lt;/P&gt;</description>
      <pubDate>Sat, 04 Feb 2017 05:30:43 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Yarn-Application-failed-on-out-of-memory/m-p/50397#M35058</guid>
      <dc:creator>mbigelow</dc:creator>
      <dc:date>2017-02-04T05:30:43Z</dc:date>
    </item>
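    <!--
    A minimal yarn-site.xml sketch of the setting named in the reply above (property name from the post; it applies on each NodeManager, which must be restarted to pick up the change):

    ```xml
    <configuration>
      <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
      </property>
      <property>
        <name>yarn.nodemanager.vmem-pmem-ratio</name>
        <value>2.1</value>
      </property>
    </configuration>
    ```

    The second property is the 2.1 virtual-to-physical ratio mentioned in the reply; it is only enforced while the check is enabled.
    -->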
    <item>
      <title>Re: Yarn Application failed on out of memory</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Yarn-Application-failed-on-out-of-memory/m-p/86050#M35059</link>
      <description>&lt;P&gt;How can I disable `yarn.nodemanager.vmem-check-enabled`? I tried adding it to the `&lt;SPAN&gt;NodeManager Advanced Configuration Snippet (Safety Valve) for yarn-site.xml&lt;/SPAN&gt;`, but I don't see it in the yarn-site.xml on the nodes.&lt;/P&gt;</description>
      <pubDate>Thu, 07 Feb 2019 08:56:51 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Yarn-Application-failed-on-out-of-memory/m-p/86050#M35059</guid>
      <dc:creator>Izek</dc:creator>
      <dc:date>2019-02-07T08:56:51Z</dc:date>
    </item>
    <item>
      <title>Re: Yarn Application failed on out of memory</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Yarn-Application-failed-on-out-of-memory/m-p/87197#M35060</link>
      <description>vmem checks have been disabled in CDH almost since their introduction. The vmem check is not stable and is highly dependent on the Linux version and distro. If you run CDH, you are already running with it disabled.&lt;BR /&gt;&lt;BR /&gt;Wilfred</description>
      <pubDate>Tue, 05 Mar 2019 02:53:04 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Yarn-Application-failed-on-out-of-memory/m-p/87197#M35060</guid>
      <dc:creator>Wilfred</dc:creator>
      <dc:date>2019-03-05T02:53:04Z</dc:date>
    </item>
  </channel>
</rss>

