<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Jobs fail in Yarn with out of Java heap memory error in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Jobs-fail-in-Yarn-with-out-of-Java-heap-memory-error/m-p/18906#M41881</link>
    <description>&amp;lt;name&amp;gt;mapreduce.reduce.java.opts&amp;lt;/name&amp;gt;&lt;BR /&gt;&amp;lt;value&amp;gt;-Djava.net.preferIPv4Stack=true -Xmx1280m -Xmx825955249&amp;lt;/value&amp;gt;&lt;BR /&gt;&lt;BR /&gt;Limits the heap to ~825MB. Most JVMs resolve duplicate args by picking the last one. So this is nowhere close to the 3GB that you intended. You should find out where you set this in CM and change it.&lt;BR /&gt;&lt;BR /&gt;Do that before you play with parallelcopies. But to answer your questions, yes, it'll increase CPU, memory &amp;amp; network usage. And it could lead to more disk spills and slow down your job.</description>
    <pubDate>Wed, 17 Sep 2014 17:12:47 GMT</pubDate>
    <dc:creator>bcwalrus</dc:creator>
    <dc:date>2014-09-17T17:12:47Z</dc:date>
    <item>
      <title>Jobs fail in Yarn with out of Java heap memory error</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Jobs-fail-in-Yarn-with-out-of-Java-heap-memory-error/m-p/18772#M41868</link>
      <description>&lt;P&gt;We are running Yarn on CDH 5.1 with 14 nodes, each using&amp;nbsp;6 GB of memory. I understand this is not a lot of memory, but it is all we could put together. Most jobs complete without error, but a few of the larger MapReduce jobs fail with an out of Java heap memory error. The jobs fail on a Reduce task that either sorts or groups data. We recently upgraded to CDH 5.1 from CDH 4.7 and ALL of these jobs succeeded on MapReduce v1. Looking in the logs, I see that the Application has retried a few times before failing.&amp;nbsp;Can you see anything wrong with the way the resources are configured?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;TABLE&gt;&lt;TBODY&gt;&lt;TR&gt;&lt;TD&gt;&lt;P&gt;Java Heap Size of NodeManager in Bytes&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;1 GB&lt;/P&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;&lt;P&gt;yarn.nodemanager.resource.memory-mb&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;6 GB&lt;/P&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;&lt;P&gt;yarn.scheduler.minimum-allocation-mb&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;1 GB&lt;/P&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;&lt;P&gt;yarn.scheduler.maximum-allocation-mb&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;6 GB&lt;/P&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;&lt;P&gt;yarn.app.mapreduce.am.resource.mb&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;1.5 GB&lt;/P&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;&lt;P&gt;yarn.nodemanager.container-manager.thread-count&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;20&lt;/P&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;&lt;P&gt;yarn.resourcemanager.resource-tracker.client.thread-count&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;20&lt;/P&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;&lt;P&gt;mapreduce.map.memory.mb&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;1.5 GB&lt;/P&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;&lt;P&gt;mapreduce.reduce.memory.mb&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;3 
GB&lt;/P&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;&lt;P&gt;mapreduce.map.java.opts&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;"-Djava.net.preferIPv4Stack=true -Xmx 1228m";&lt;/P&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;&lt;P&gt;mapreduce.reduce.java.opts&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;"-Djava.net.preferIPv4Stack=true -Xmx2457m";&lt;/P&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;&lt;P&gt;mapreduce.task.io.sort.factor&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;5&lt;/P&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;&lt;P&gt;mapreduce.task.io.sort.mb&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;512 MB&lt;/P&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;&lt;P&gt;mapreduce.job.reduces&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;2&lt;/P&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;&lt;P&gt;mapreduce.reduce.shuffle.parallelcopies&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;4&lt;/P&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;/TBODY&gt;&lt;/TABLE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;One thing that might help, Yarn&amp;nbsp;runs 4 containers per node, can this be reduced?&lt;/P&gt;</description>
      <pubDate>Fri, 16 Sep 2022 09:07:42 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Jobs-fail-in-Yarn-with-out-of-Java-heap-memory-error/m-p/18772#M41868</guid>
      <dc:creator>IT.Services</dc:creator>
      <dc:date>2022-09-16T09:07:42Z</dc:date>
    </item>
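One relationship worth checking in the table above: each task's container size (mapreduce.map.memory.mb / mapreduce.reduce.memory.mb) must leave headroom above the JVM heap set by -Xmx, since the task process uses memory beyond the heap. A heap at roughly 75-80% of the container is a common rule of thumb (an assumption here, not something the thread states), which the question's reduce-side numbers already follow:

```xml
<!-- Sketch using the values from the question above; the ~80%
     heap-to-container ratio is a rule of thumb, not a requirement -->
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>3072</value>   <!-- 3 GB container -->
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Djava.net.preferIPv4Stack=true -Xmx2457m</value>   <!-- ~80% of 3072 MB -->
</property>
```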
    <item>
      <title>Re: Jobs fail in Yarn with out of Java heap memory error</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Jobs-fail-in-Yarn-with-out-of-Java-heap-memory-error/m-p/18774#M41869</link>
      <description>&lt;P&gt;What are your MR1 settings? Did your reducers also get -Xmx2457m on MR1?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Also, the AM memory at 1.5GB is a bit high. You could probably cut that to 1GB.&lt;/P&gt;</description>
      <pubDate>Mon, 15 Sep 2014 16:33:12 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Jobs-fail-in-Yarn-with-out-of-Java-heap-memory-error/m-p/18774#M41869</guid>
      <dc:creator>bcwalrus</dc:creator>
      <dc:date>2014-09-15T16:33:12Z</dc:date>
    </item>
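The suggestion to cut the AM to 1 GB maps to a single property; a minimal fragment (the 1024 value is taken from the advice above, down from the configured 1536):

```xml
<property>
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>1024</value>   <!-- 1 GB for the MapReduce ApplicationMaster container -->
</property>
```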
    <item>
      <title>Re: Jobs fail in Yarn with out of Java heap memory error</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Jobs-fail-in-Yarn-with-out-of-Java-heap-memory-error/m-p/18786#M41870</link>
      <description>&lt;P&gt;Thanks bcwalrus, very good question:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;In MRv1, we configured the&amp;nbsp;&lt;SPAN&gt;Java Heap Size of TaskTracker in Bytes to 600 MB. Do you think I've set this too high in MRv2?&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;I'll cut the AM memory down to 1 GB; that is good advice. That will save me some memory on the node.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Kevin&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 15 Sep 2014 17:30:45 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Jobs-fail-in-Yarn-with-out-of-Java-heap-memory-error/m-p/18786#M41870</guid>
      <dc:creator>IT.Services</dc:creator>
      <dc:date>2014-09-15T17:30:45Z</dc:date>
    </item>
    <item>
      <title>Re: Jobs fail in Yarn with out of Java heap memory error</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Jobs-fail-in-Yarn-with-out-of-Java-heap-memory-error/m-p/18806#M41871</link>
      <description>I'm not asking about the heap of the TT process. I'm asking about the -Xmx of the reducers of this particular job (which used to work in MR1 and is failing in MR2).&lt;BR /&gt;&lt;BR /&gt;You said that the reducers are failing due to OOME. They're getting 2457MB in MR2. What did they get in MR1?</description>
      <pubDate>Tue, 16 Sep 2014 00:25:50 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Jobs-fail-in-Yarn-with-out-of-Java-heap-memory-error/m-p/18806#M41871</guid>
      <dc:creator>bcwalrus</dc:creator>
      <dc:date>2014-09-16T00:25:50Z</dc:date>
    </item>
    <item>
      <title>Re: Jobs fail in Yarn with out of Java heap memory error</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Jobs-fail-in-Yarn-with-out-of-Java-heap-memory-error/m-p/18830#M41872</link>
      <description>I don't think we ever changed the -Xmx on the reducers in MR1; it would have remained at the default. Do you know what the default is for MR1?</description>
      <pubDate>Tue, 16 Sep 2014 15:32:30 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Jobs-fail-in-Yarn-with-out-of-Java-heap-memory-error/m-p/18830#M41872</guid>
      <dc:creator>IT.Services</dc:creator>
      <dc:date>2014-09-16T15:32:30Z</dc:date>
    </item>
    <item>
      <title>Re: Jobs fail in Yarn with out of Java heap memory error</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Jobs-fail-in-Yarn-with-out-of-Java-heap-memory-error/m-p/18834#M41873</link>
      <description>The default in MR1 is unlimited, for both mapred.cluster.max.reduce.memory.mb and mapred.job.reduce.memory.mb. What did you set for mapred.child.java.opts (MR1)? Do you have the job counters from a big MR1 job? It'll tell you the average memory usage across the reducers, which will give you a good idea on what to set for MR2.</description>
      <pubDate>Tue, 16 Sep 2014 16:11:08 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Jobs-fail-in-Yarn-with-out-of-Java-heap-memory-error/m-p/18834#M41873</guid>
      <dc:creator>bcwalrus</dc:creator>
      <dc:date>2014-09-16T16:11:08Z</dc:date>
    </item>
    <item>
      <title>Re: Jobs fail in Yarn with out of Java heap memory error</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Jobs-fail-in-Yarn-with-out-of-Java-heap-memory-error/m-p/18844#M41874</link>
      <description>Thanks for your help with this problem, I didn't know the default was unlimited. The max number of reducers for each TT was set at 2. I don't have the job counters from a big MR1 job, but I might be able to look them up. Where would I find them?</description>
      <pubDate>Tue, 16 Sep 2014 16:36:55 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Jobs-fail-in-Yarn-with-out-of-Java-heap-memory-error/m-p/18844#M41874</guid>
      <dc:creator>IT.Services</dc:creator>
      <dc:date>2014-09-16T16:36:55Z</dc:date>
    </item>
    <item>
      <title>Re: Jobs fail in Yarn with out of Java heap memory error</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Jobs-fail-in-Yarn-with-out-of-Java-heap-memory-error/m-p/18846#M41875</link>
      <description>&lt;P&gt;From the Yarn logs I can see that Yarn allows a huge amount of virtual memory before the job is killed. Why is it using so much virtual memory? Where is this limit set?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;2014-09-16 10:18:30,803 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 51870 for container-id container_1410882800578_0001_01_000001: 797.0 MB of 2.5 GB physical memory used; 1.8 GB of &lt;SPAN style="text-decoration: underline;"&gt;&lt;STRONG&gt;5.3 GB&lt;/STRONG&gt;&lt;/SPAN&gt; virtual memory used
2014-09-16 10:18:33,829 INFO &lt;BR /&gt;...&lt;BR /&gt;org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Stopping container with container Id: container_1410882800578_0005_01_000048
2014-09-16 10:18:34,431 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=admin	IP=192.168.210.251	OPERATION=Stop Container Request	TARGET=ContainerManageImpl	RESULT=SUCCESS	APPID=application_1410882800578_0005	CONTAINERID=container_1410882800578_0005_01_000048
2014-09-16 10:18:34,432 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1410882800578_0005_01_000048 transitioned from RUNNING to KILLING
2014-09-16 10:18:34,433 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1410882800578_0005_01_000048
2014-09-16 10:18:34,462 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from container container_1410882800578_0005_01_000048 is : 143
2014-09-16 10:18:34,550 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1410882800578_0005_01_000048 transitioned from KILLING to CONTAINER_CLEANEDUP_AFTER_KILL
2014-09-16 10:18:34,553 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /space1/yarn/nm/usercache/admin/appcache/application_1410882800578_0005/container_1410882800578_0005_01_000048
2014-09-16 10:18:34,556 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /space2/yarn/nm/usercache/admin/appcache/application_1410882800578_0005/container_1410882800578_0005_01_000048
2014-09-16 10:18:34,558 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=admin	OPERATION=Container Finished - Killed	TARGET=ContainerImpl	RESULT=SUCCESS	APPID=application_1410882800578_0005	CONTAINERID=container_1410882800578_0005_01_000048&lt;/PRE&gt;</description>
      <pubDate>Tue, 16 Sep 2014 17:23:38 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Jobs-fail-in-Yarn-with-out-of-Java-heap-memory-error/m-p/18846#M41875</guid>
      <dc:creator>IT.Services</dc:creator>
      <dc:date>2014-09-16T17:23:38Z</dc:date>
    </item>
    <item>
      <title>Re: Jobs fail in Yarn with out of Java heap memory error</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Jobs-fail-in-Yarn-with-out-of-Java-heap-memory-error/m-p/18848#M41876</link>
      <description>Virtual memory checking is pointless. Please make sure that 'yarn.nodemanager.vmem-check-enabled' is turned off. The CDH default is off already.&lt;BR /&gt;&lt;BR /&gt;That shouldn't matter though. You said that the job died due to OOME. It didn't die because it got killed by the NM.</description>
      <pubDate>Tue, 16 Sep 2014 17:31:47 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Jobs-fail-in-Yarn-with-out-of-Java-heap-memory-error/m-p/18848#M41876</guid>
      <dc:creator>bcwalrus</dc:creator>
      <dc:date>2014-09-16T17:31:47Z</dc:date>
    </item>
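If the check is on, it can be turned off with the property named in the reply above; a minimal yarn-site.xml fragment:

```xml
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>   <!-- NM will not kill containers for virtual-memory usage -->
</property>
```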
    <item>
      <title>Re: Jobs fail in Yarn with out of Java heap memory error</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Jobs-fail-in-Yarn-with-out-of-Java-heap-memory-error/m-p/18850#M41877</link>
      <description>Thanks bcwalrus, what if I increased the mapreduce.task.io.sort.factor, which is currently set to 5?&lt;BR /&gt;&lt;BR /&gt;Also, do you know if it would be helpful to increase the mapreduce.reduce.java.opts.max.heap from the current setting of 787.69 MiB? Or is this not helpful?</description>
      <pubDate>Tue, 16 Sep 2014 17:38:04 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Jobs-fail-in-Yarn-with-out-of-Java-heap-memory-error/m-p/18850#M41877</guid>
      <dc:creator>IT.Services</dc:creator>
      <dc:date>2014-09-16T17:38:04Z</dc:date>
    </item>
    <item>
      <title>Re: Jobs fail in Yarn with out of Java heap memory error</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Jobs-fail-in-Yarn-with-out-of-Java-heap-memory-error/m-p/18852#M41878</link>
      <description>I'm confused. Your initial post says that the reduce heap is 2457MB. Now it seems that's just 787.69MB. Which one is right? What does /etc/hadoop/conf/mapred-site.xml say?</description>
      <pubDate>Tue, 16 Sep 2014 17:56:36 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Jobs-fail-in-Yarn-with-out-of-Java-heap-memory-error/m-p/18852#M41878</guid>
      <dc:creator>bcwalrus</dc:creator>
      <dc:date>2014-09-16T17:56:36Z</dc:date>
    </item>
    <item>
      <title>Re: Jobs fail in Yarn with out of Java heap memory error</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Jobs-fail-in-Yarn-with-out-of-Java-heap-memory-error/m-p/18860#M41879</link>
      <description>&lt;P&gt;Here is what I have. Are the map/reduce java opts being overwritten? There are two entries.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;lt;!--Autogenerated by Cloudera Manager--&amp;gt;&lt;BR /&gt;&amp;lt;configuration&amp;gt;&lt;BR /&gt;&amp;lt;property&amp;gt;&lt;BR /&gt;&amp;lt;name&amp;gt;mapreduce.job.split.metainfo.maxsize&amp;lt;/name&amp;gt;&lt;BR /&gt;&amp;lt;value&amp;gt;10000000&amp;lt;/value&amp;gt;&lt;BR /&gt;&amp;lt;/property&amp;gt;&lt;BR /&gt;&amp;lt;property&amp;gt;&lt;BR /&gt;&amp;lt;name&amp;gt;mapreduce.job.counters.max&amp;lt;/name&amp;gt;&lt;BR /&gt;&amp;lt;value&amp;gt;120&amp;lt;/value&amp;gt;&lt;BR /&gt;&amp;lt;/property&amp;gt;&lt;BR /&gt;&amp;lt;property&amp;gt;&lt;BR /&gt;&amp;lt;name&amp;gt;mapreduce.output.fileoutputformat.compress&amp;lt;/name&amp;gt;&lt;BR /&gt;&amp;lt;value&amp;gt;false&amp;lt;/value&amp;gt;&lt;BR /&gt;&amp;lt;/property&amp;gt;&lt;BR /&gt;&amp;lt;property&amp;gt;&lt;BR /&gt;&amp;lt;name&amp;gt;mapreduce.output.fileoutputformat.compress.type&amp;lt;/name&amp;gt;&lt;BR /&gt;&amp;lt;value&amp;gt;BLOCK&amp;lt;/value&amp;gt;&lt;BR /&gt;&amp;lt;/property&amp;gt;&lt;BR /&gt;&amp;lt;property&amp;gt;&lt;BR /&gt;&amp;lt;name&amp;gt;mapreduce.output.fileoutputformat.compress.codec&amp;lt;/name&amp;gt;&lt;BR /&gt;&amp;lt;value&amp;gt;org.apache.hadoop.io.compress.DefaultCodec&amp;lt;/value&amp;gt;&lt;BR /&gt;&amp;lt;/property&amp;gt;&lt;BR /&gt;&amp;lt;property&amp;gt;&lt;BR /&gt;&amp;lt;name&amp;gt;mapreduce.map.output.compress.codec&amp;lt;/name&amp;gt;&lt;BR /&gt;&amp;lt;value&amp;gt;org.apache.hadoop.io.compress.SnappyCodec&amp;lt;/value&amp;gt;&lt;BR /&gt;&amp;lt;/property&amp;gt;&lt;BR /&gt;&amp;lt;property&amp;gt;&lt;BR /&gt;&amp;lt;name&amp;gt;mapreduce.map.output.compress&amp;lt;/name&amp;gt;&lt;BR /&gt;&amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;&lt;BR /&gt;&amp;lt;/property&amp;gt;&lt;BR 
/&gt;&amp;lt;name&amp;gt;zlib.compress.level&amp;lt;/name&amp;gt;&lt;BR /&gt;&amp;lt;value&amp;gt;DEFAULT_COMPRESSION&amp;lt;/value&amp;gt;&lt;BR /&gt;&amp;lt;/property&amp;gt;&lt;BR /&gt;&amp;lt;property&amp;gt;&lt;BR /&gt;&amp;lt;name&amp;gt;mapreduce.task.io.sort.factor&amp;lt;/name&amp;gt;&lt;BR /&gt;&amp;lt;value&amp;gt;5&amp;lt;/value&amp;gt;&lt;BR /&gt;&amp;lt;/property&amp;gt;&lt;BR /&gt;&amp;lt;property&amp;gt;&lt;BR /&gt;&amp;lt;name&amp;gt;mapreduce.map.sort.spill.percent&amp;lt;/name&amp;gt;&lt;BR /&gt;&amp;lt;value&amp;gt;0.8&amp;lt;/value&amp;gt;&lt;BR /&gt;&amp;lt;/property&amp;gt;&lt;BR /&gt;&amp;lt;property&amp;gt;&lt;BR /&gt;&amp;lt;name&amp;gt;mapreduce.reduce.shuffle.parallelcopies&amp;lt;/name&amp;gt;&lt;BR /&gt;&amp;lt;value&amp;gt;4&amp;lt;/value&amp;gt;&lt;BR /&gt;&amp;lt;/property&amp;gt;&lt;BR /&gt;&amp;lt;property&amp;gt;&lt;BR /&gt;&amp;lt;name&amp;gt;mapreduce.task.timeout&amp;lt;/name&amp;gt;&lt;BR /&gt;&amp;lt;value&amp;gt;600000&amp;lt;/value&amp;gt;&lt;BR /&gt;&amp;lt;/property&amp;gt;&lt;BR /&gt;&amp;lt;property&amp;gt;&lt;BR /&gt;&amp;lt;name&amp;gt;mapreduce.client.submit.file.replication&amp;lt;/name&amp;gt;&lt;BR /&gt;&amp;lt;value&amp;gt;4&amp;lt;/value&amp;gt;&lt;BR /&gt;&amp;lt;/property&amp;gt;&lt;BR /&gt;&amp;lt;property&amp;gt;&lt;BR /&gt;&amp;lt;name&amp;gt;mapreduce.job.reduces&amp;lt;/name&amp;gt;&lt;BR /&gt;&amp;lt;value&amp;gt;2&amp;lt;/value&amp;gt;&lt;BR /&gt;&amp;lt;/property&amp;gt;&lt;BR /&gt;&amp;lt;property&amp;gt;&lt;BR /&gt;&amp;lt;name&amp;gt;mapreduce.task.io.sort.mb&amp;lt;/name&amp;gt;&lt;BR /&gt;&amp;lt;value&amp;gt;512&amp;lt;/value&amp;gt;&lt;BR /&gt;&amp;lt;/property&amp;gt;&lt;BR /&gt;&amp;lt;property&amp;gt;&lt;BR /&gt;&amp;lt;name&amp;gt;mapreduce.map.speculative&amp;lt;/name&amp;gt;&lt;BR /&gt;&amp;lt;value&amp;gt;false&amp;lt;/value&amp;gt;&lt;BR /&gt;&amp;lt;/property&amp;gt;&lt;BR /&gt;&amp;lt;property&amp;gt;&lt;BR 
/&gt;&amp;lt;name&amp;gt;mapreduce.reduce.speculative&amp;lt;/name&amp;gt;&lt;BR /&gt;&amp;lt;value&amp;gt;false&amp;lt;/value&amp;gt;&lt;BR /&gt;&amp;lt;/property&amp;gt;&lt;BR /&gt;&amp;lt;property&amp;gt;&lt;BR /&gt;&amp;lt;name&amp;gt;mapreduce.job.reduce.slowstart.completedmaps&amp;lt;/name&amp;gt;&lt;BR /&gt;&amp;lt;value&amp;gt;0.8&amp;lt;/value&amp;gt;&lt;BR /&gt;&amp;lt;/property&amp;gt;&lt;BR /&gt;&amp;lt;property&amp;gt;&lt;BR /&gt;&amp;lt;name&amp;gt;mapreduce.jobhistory.address&amp;lt;/name&amp;gt;&lt;BR /&gt;&amp;lt;value&amp;gt;blvdevhdp05.ds-iq.corp:10020&amp;lt;/value&amp;gt;&lt;BR /&gt;&amp;lt;/property&amp;gt;&lt;BR /&gt;&amp;lt;property&amp;gt;&lt;BR /&gt;&amp;lt;name&amp;gt;mapreduce.jobhistory.webapp.address&amp;lt;/name&amp;gt;&lt;BR /&gt;&amp;lt;value&amp;gt;blvdevhdp05.ds-iq.corp:19888&amp;lt;/value&amp;gt;&lt;BR /&gt;&amp;lt;/property&amp;gt;&lt;BR /&gt;&amp;lt;property&amp;gt;&lt;BR /&gt;&amp;lt;name&amp;gt;mapreduce.framework.name&amp;lt;/name&amp;gt;&lt;BR /&gt;&amp;lt;value&amp;gt;yarn&amp;lt;/value&amp;gt;&lt;BR /&gt;&amp;lt;/property&amp;gt;&lt;BR /&gt;&amp;lt;property&amp;gt;&lt;BR /&gt;&amp;lt;name&amp;gt;yarn.app.mapreduce.am.staging-dir&amp;lt;/name&amp;gt;&lt;BR /&gt;&amp;lt;value&amp;gt;/user&amp;lt;/value&amp;gt;&lt;BR /&gt;&amp;lt;/property&amp;gt;&lt;BR /&gt;&amp;lt;property&amp;gt;&lt;BR /&gt;&amp;lt;name&amp;gt;yarn.app.mapreduce.am.resource.mb&amp;lt;/name&amp;gt;&lt;BR /&gt;&amp;lt;value&amp;gt;1536&amp;lt;/value&amp;gt;&lt;BR /&gt;&amp;lt;/property&amp;gt;&lt;BR /&gt;&amp;lt;property&amp;gt;&lt;BR /&gt;&amp;lt;name&amp;gt;yarn.app.mapreduce.am.resource.cpu-vcores&amp;lt;/name&amp;gt;&lt;BR /&gt;&amp;lt;value&amp;gt;1&amp;lt;/value&amp;gt;&lt;BR /&gt;&amp;lt;/property&amp;gt;&lt;BR /&gt;&amp;lt;property&amp;gt;&lt;BR /&gt;&amp;lt;name&amp;gt;mapreduce.job.ubertask.enabled&amp;lt;/name&amp;gt;&lt;BR /&gt;&amp;lt;value&amp;gt;false&amp;lt;/value&amp;gt;&lt;BR /&gt;&amp;lt;/property&amp;gt;&lt;BR 
/&gt;&amp;lt;property&amp;gt;&lt;BR /&gt;&amp;lt;name&amp;gt;yarn.app.mapreduce.am.command-opts&amp;lt;/name&amp;gt;&lt;BR /&gt;&amp;lt;value&amp;gt;-Djava.net.preferIPv4Stack=true -Xmx825955249&amp;lt;/value&amp;gt;&lt;BR /&gt;&amp;lt;/property&amp;gt;&lt;BR /&gt;&amp;lt;property&amp;gt;&lt;BR /&gt;&amp;lt;name&amp;gt;mapreduce.map.java.opts&amp;lt;/name&amp;gt;&lt;BR /&gt;&amp;lt;value&amp;gt;-Djava.net.preferIPv4Stack=true&lt;STRONG&gt; -Xmx768m -Xmx825955249&lt;/STRONG&gt;&amp;lt;/value&amp;gt;&lt;BR /&gt;&amp;lt;/property&amp;gt;&lt;BR /&gt;&amp;lt;property&amp;gt;&lt;BR /&gt;&amp;lt;name&amp;gt;mapreduce.reduce.java.opts&amp;lt;/name&amp;gt;&lt;BR /&gt;&amp;lt;value&amp;gt;-Djava.net.preferIPv4Stack=true&lt;STRONG&gt; -Xmx1280m -Xmx825955249&lt;/STRONG&gt;&amp;lt;/value&amp;gt;&lt;BR /&gt;&amp;lt;/property&amp;gt;&lt;BR /&gt;&amp;lt;property&amp;gt;&lt;BR /&gt;&amp;lt;name&amp;gt;mapreduce.map.memory.mb&amp;lt;/name&amp;gt;&lt;BR /&gt;&amp;lt;value&amp;gt;1280&amp;lt;/value&amp;gt;&lt;BR /&gt;&amp;lt;/property&amp;gt;&lt;BR /&gt;&amp;lt;property&amp;gt;&lt;BR /&gt;&amp;lt;name&amp;gt;mapreduce.map.cpu.vcores&amp;lt;/name&amp;gt;&lt;BR /&gt;&amp;lt;value&amp;gt;1&amp;lt;/value&amp;gt;&lt;BR /&gt;&amp;lt;/property&amp;gt;&lt;BR /&gt;&amp;lt;property&amp;gt;&lt;BR /&gt;&amp;lt;name&amp;gt;mapreduce.reduce.memory.mb&amp;lt;/name&amp;gt;&lt;BR /&gt;&amp;lt;value&amp;gt;1792&amp;lt;/value&amp;gt;&lt;BR /&gt;&amp;lt;/property&amp;gt;&lt;BR /&gt;&amp;lt;property&amp;gt;&lt;BR /&gt;&amp;lt;name&amp;gt;mapreduce.reduce.cpu.vcores&amp;lt;/name&amp;gt;&lt;BR /&gt;&amp;lt;value&amp;gt;1&amp;lt;/value&amp;gt;&lt;BR /&gt;&amp;lt;/property&amp;gt;&lt;BR /&gt;&amp;lt;property&amp;gt;&lt;BR /&gt;&amp;lt;name&amp;gt;mapreduce.application.classpath&amp;lt;/name&amp;gt;&lt;BR /&gt;&amp;lt;value&amp;gt;$HADOOP_MAPRED_HOME/*,$HADOOP_MAPRED_HOME/lib/*,$MR2_CLASSPATH&amp;lt;/value&amp;gt;&lt;BR /&gt;&amp;lt;/property&amp;gt;&lt;BR /&gt;&amp;lt;property&amp;gt;&lt;BR 
/&gt;&amp;lt;name&amp;gt;mapreduce.admin.user.env&amp;lt;/name&amp;gt;&lt;BR /&gt;&amp;lt;value&amp;gt;LD_LIBRARY_PATH=$HADOOP_COMMON_HOME/lib/native:$JAVA_LIBRARY_PATH&amp;lt;/value&amp;gt;&lt;BR /&gt;&amp;lt;/property&amp;gt;&lt;BR /&gt;&amp;lt;property&amp;gt;&lt;BR /&gt;&amp;lt;name&amp;gt;mapreduce.shuffle.max.connections&amp;lt;/name&amp;gt;&lt;BR /&gt;&amp;lt;value&amp;gt;80&amp;lt;/value&amp;gt;&lt;BR /&gt;&amp;lt;/property&amp;gt;&lt;BR /&gt;&amp;lt;/configuration&amp;gt;&lt;/P&gt;</description>
      <pubDate>Tue, 16 Sep 2014 18:42:14 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Jobs-fail-in-Yarn-with-out-of-Java-heap-memory-error/m-p/18860#M41879</guid>
      <dc:creator>IT.Services</dc:creator>
      <dc:date>2014-09-16T18:42:14Z</dc:date>
    </item>
    <item>
      <title>Re: Jobs fail in Yarn with out of Java heap memory error</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Jobs-fail-in-Yarn-with-out-of-Java-heap-memory-error/m-p/18862#M41880</link>
      <description>I have an easy question: if I increase mapreduce.reduce.shuffle.parallelcopies from 4 to 10, will that increase or decrease the memory used by the node?&lt;BR /&gt;&lt;BR /&gt;It seems to me that if this is increased, data is written to files, and out of memory, more quickly. But I might be wrong...</description>
      <pubDate>Tue, 16 Sep 2014 19:05:59 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Jobs-fail-in-Yarn-with-out-of-Java-heap-memory-error/m-p/18862#M41880</guid>
      <dc:creator>IT.Services</dc:creator>
      <dc:date>2014-09-16T19:05:59Z</dc:date>
    </item>
    <item>
      <title>Re: Jobs fail in Yarn with out of Java heap memory error</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Jobs-fail-in-Yarn-with-out-of-Java-heap-memory-error/m-p/18906#M41881</link>
      <description>&amp;lt;name&amp;gt;mapreduce.reduce.java.opts&amp;lt;/name&amp;gt;&lt;BR /&gt;&amp;lt;value&amp;gt;-Djava.net.preferIPv4Stack=true -Xmx1280m -Xmx825955249&amp;lt;/value&amp;gt;&lt;BR /&gt;&lt;BR /&gt;Limits the heap to ~825MB. Most JVMs resolve duplicate args by picking the last one. So this is nowhere close to the 3GB that you intended. You should find out where you set this in CM and change it.&lt;BR /&gt;&lt;BR /&gt;Do that before you play with parallelcopies. But to answer your questions, yes, it'll increase CPU, memory &amp;amp; network usage. And it could lead to more disk spills and slow down your job.</description>
      <pubDate>Wed, 17 Sep 2014 17:12:47 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Jobs-fail-in-Yarn-with-out-of-Java-heap-memory-error/m-p/18906#M41881</guid>
      <dc:creator>bcwalrus</dc:creator>
      <dc:date>2014-09-17T17:12:47Z</dc:date>
    </item>
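The duplicate-flag behaviour described in the reply above can be sketched in a few lines of Python. This is a toy parser, not Hadoop code, and it assumes (as the reply does) that the JVM resolves repeated -Xmx flags by taking the last occurrence:

```python
# Toy resolver for duplicated -Xmx flags. Assumption (matching the reply
# above): most JVMs apply the LAST occurrence of a repeated option.
def effective_xmx_bytes(java_opts: str) -> int:
    """Return the heap cap in bytes implied by the last -Xmx flag."""
    units = {"k": 1024, "m": 1024 ** 2, "g": 1024 ** 3}
    xmx = None
    for token in java_opts.split():
        if token.startswith("-Xmx"):
            value = token[4:].lower()
            if value and value[-1] in units:
                xmx = int(value[:-1]) * units[value[-1]]
            else:
                xmx = int(value)  # a bare number is plain bytes
    return xmx

# The flags quoted from the thread: the bare byte count comes last and wins.
opts = "-Djava.net.preferIPv4Stack=true -Xmx1280m -Xmx825955249"
print(effective_xmx_bytes(opts))                        # 825955249 bytes
print(round(effective_xmx_bytes(opts) / 1024 ** 2, 2))  # 787.69 MiB
```

825955249 bytes is the 787.69 MiB cap that Cloudera Manager showed earlier in the thread, nowhere near the intended 2457 MiB.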
    <item>
      <title>Re: Jobs fail in Yarn with out of Java heap memory error</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Jobs-fail-in-Yarn-with-out-of-Java-heap-memory-error/m-p/18910#M41882</link>
      <description>&lt;P&gt;You pointed out the problem and I removed the -Xmx825955249 from where I had entered it in Cloudera Manager. I was using the wrong field to update the value.&amp;nbsp;Thank you so much for sticking with me and helping me resolve this issue! The jobs now succeed!&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Kevin Verhoeven&lt;/P&gt;</description>
      <pubDate>Wed, 17 Sep 2014 18:25:02 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Jobs-fail-in-Yarn-with-out-of-Java-heap-memory-error/m-p/18910#M41882</guid>
      <dc:creator>IT.Services</dc:creator>
      <dc:date>2014-09-17T18:25:02Z</dc:date>
    </item>
  </channel>
</rss>

