<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Map and Reduce Error: Java heap space in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/46029#M42535</link>
    <description>Sounds good mate</description>
    <pubDate>Fri, 07 Oct 2016 06:37:24 GMT</pubDate>
    <dc:creator>csguna</dc:creator>
    <dc:date>2016-10-07T06:37:24Z</dc:date>
    <item>
      <title>Map and Reduce Error: Java heap space</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/45874#M42531</link>
      <description>&lt;P&gt;I'm using the QuickStart VM with CDH 5.3, trying to run a modified sample from MR-parquet read. It worked OK on a 10M-row parquet table, but I get a "Java heap space" error on a table having 40M rows:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT face="courier new,courier" size="2"&gt;[cloudera@quickstart sep]$ yarn jar testmr-1.0-SNAPSHOT.jar TestReadParquet /user/hive/warehouse/parquet_table out_file18 -Dmapreduce.reduce.memory.mb=5120 -Dmapreduce.reduce.java.opts=-Xmx4608m -Dmapreduce.map.memory.mb=5120 -Dmapreduce.map.java.opts=-Xmx4608m&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;16/10/03 12:19:30 INFO client.RMProxy: Connecting to ResourceManager at quickstart.cloudera/127.0.0.1:8032&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;16/10/03 12:19:31 INFO input.FileInputFormat: Total input paths to process : 1&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;Oct 03, 2016 12:19:31 PM parquet.Log info&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;INFO: Total input paths to process : 1&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;Oct 03, 2016 12:19:31 PM parquet.Log info&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;INFO: Initiating action with parallelism: 5&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;Oct 03, 2016 12:19:31 PM parquet.Log info&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;INFO: reading another 1 footers&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;Oct 03, 2016 12:19:31 PM parquet.Log info&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;INFO: Initiating action with parallelism: 5&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;SLF4J: Defaulting to no-operation (NOP) logger implementation&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;SLF4J: See &lt;A href="http://www.slf4j.org/codes.html#StaticLoggerBinder" 
target="_blank"&gt;http://www.slf4j.org/codes.html#StaticLoggerBinder&lt;/A&gt; for further details.&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;16/10/03 12:19:31 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;16/10/03 12:19:31 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;Oct 03, 2016 12:19:31 PM parquet.Log info&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;INFO: There were no row groups that could be dropped due to filter predicates&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;16/10/03 12:19:32 INFO mapreduce.JobSubmitter: number of splits:1&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;16/10/03 12:19:32 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1475517800829_0009&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;16/10/03 12:19:33 INFO impl.YarnClientImpl: Submitted application application_1475517800829_0009&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;16/10/03 12:19:33 INFO mapreduce.Job: The url to track the job: &lt;A href="http://quickstart.cloudera:8088/proxy/application_1475517800829_0009/" target="_blank"&gt;http://quickstart.cloudera:8088/proxy/application_1475517800829_0009/&lt;/A&gt;&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;16/10/03 12:19:33 INFO mapreduce.Job: Running job: job_1475517800829_0009&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;16/10/03 12:19:47 INFO mapreduce.Job: Job job_1475517800829_0009 running in uber mode : false&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;16/10/03 12:19:47 
INFO mapreduce.Job: map 0% reduce 0%&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;16/10/03 12:20:57 INFO mapreduce.Job: map 100% reduce 0%&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;16/10/03 12:20:57 INFO mapreduce.Job: Task Id : attempt_1475517800829_0009_m_000000_0, Status : FAILED&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;Error: Java heap space&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;Container killed by the ApplicationMaster.&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;Container killed on request. Exit code is 143&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;Container exited with a non-zero exit code 143&lt;/FONT&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;I've also tried editing /etc/hadoop/conf/mapred-site.xml, and tried via the Cloudera Manager GUI (Clusters-&amp;gt;HDFS-&amp;gt; ... &lt;SPAN class="cmfParamName"&gt;Java Heap Size of DataNode in Bytes&lt;/SPAN&gt;)&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT face="courier new,courier" size="2"&gt;[cloudera@quickstart sep]$ free -m&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;total used free shared buffers cached&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;Mem: 13598 13150 447 0 23 206&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;-/+ buffers/cache: 12920 677&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;Swap: 6015 2187 3828&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Mapper class:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;&lt;SPAN&gt;public static class &lt;/SPAN&gt;MyMap &lt;SPAN&gt;extends&lt;BR /&gt;&lt;/SPAN&gt;        Mapper&amp;lt;LongWritable, Group, NullWritable, Text&amp;gt; {&lt;BR /&gt;&lt;BR /&gt;    &lt;SPAN&gt;@Override&lt;BR /&gt;&lt;/SPAN&gt;    &lt;SPAN&gt;public void &lt;/SPAN&gt;map(LongWritable key, Group value, Context context) &lt;SPAN&gt;throws &lt;/SPAN&gt;IOException, InterruptedException {&lt;BR /&gt;        NullWritable outKey = NullWritable.&lt;SPAN&gt;get&lt;/SPAN&gt;();&lt;BR /&gt;        String outputRecord = &lt;SPAN&gt;""&lt;/SPAN&gt;;&lt;BR /&gt;        &lt;SPAN&gt;// Get the schema and field values of the record&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;        // String inputRecord = value.toString();&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;        // Process the value, create an output record&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;        // ...&lt;BR /&gt;&lt;/SPAN&gt;        &lt;SPAN&gt;int &lt;/SPAN&gt;field1 = value.getInteger(&lt;SPAN&gt;"x"&lt;/SPAN&gt;, &lt;SPAN&gt;0&lt;/SPAN&gt;);&lt;BR /&gt;&lt;BR /&gt;        &lt;SPAN&gt;if &lt;/SPAN&gt;(field1 &amp;lt; &lt;SPAN&gt;3&lt;/SPAN&gt;) {&lt;BR /&gt;            context.write(outKey, &lt;SPAN&gt;new &lt;/SPAN&gt;Text(outputRecord));&lt;BR /&gt;        }&lt;BR /&gt;    }&lt;BR /&gt;}&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 16 Sep 2022 10:43:01 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/45874#M42531</guid>
      <dc:creator>Triffids</dc:creator>
      <dc:date>2022-09-16T10:43:01Z</dc:date>
    </item>
    <item>
      <title>Re: Map and Reduce Error: Java heap space</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/45997#M42532</link>
      <description>&lt;P&gt;hadoop-cmf-yarn-NODEMANAGER-quickstart.cloudera.log.out:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;2016-10-03 12:22:14,533 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 18309 for container-id container_1475517800829_0009_01_000005: 130.2 MB of 3 GB physical memory used; 859.9 MB of 6.3 GB virtual memory used
2016-10-03 12:22:28,045 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 16676 for container-id container_1475517800829_0009_01_000001: 178.8 MB of 1 GB physical memory used; 931.1 MB of 2.1 GB virtual memory used
2016-10-03 12:22:31,303 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 18309 for container-id container_1475517800829_0009_01_000005: 128.8 MB of 3 GB physical memory used; 859.9 MB of 6.3 GB virtual memory used
2016-10-03 12:22:46,965 WARN org.apache.hadoop.yarn.util.ProcfsBasedProcessTree: Error reading the stream java.io.IOException: No such process
2016-10-03 12:22:46,966 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 16676 for container-id container_1475517800829_0009_01_000001: 179.0 MB of 1 GB physical memory used; 931.1 MB of 2.1 GB virtual memory used
2016-10-03 12:22:47,122 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Stopping container with container Id: container_1475517800829_0009_01_000005&lt;/PRE&gt;</description>
      <pubDate>Thu, 06 Oct 2016 11:59:47 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/45997#M42532</guid>
      <dc:creator>Triffids</dc:creator>
      <dc:date>2016-10-06T11:59:47Z</dc:date>
    </item>
    <item>
      <title>Re: Map and Reduce Error: Java heap space</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/45998#M42533</link>
      <description>&lt;P&gt;Please add more memory by editing mapred-site.xml:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;&amp;lt;property&amp;gt;
    &amp;lt;name&amp;gt;mapred.child.java.opts&amp;lt;/name&amp;gt;
    &amp;lt;value&amp;gt;-Xmx4096m&amp;lt;/value&amp;gt;
&amp;lt;/property&amp;gt;&lt;/PRE&gt;&lt;P&gt;The tag above sets a 4&amp;nbsp;GB heap.&lt;/P&gt;&lt;P&gt;Let me know if that helped you.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Alternatively, you can edit the hadoop-env.sh file&amp;nbsp;&lt;/P&gt;&lt;P&gt;and add&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;export HADOOP_OPTS="-Xmx5096m"&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 06 Oct 2016 12:10:25 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/45998#M42533</guid>
      <dc:creator>csguna</dc:creator>
      <dc:date>2016-10-06T12:10:25Z</dc:date>
    </item>
    <item>
      <title>Re: Map and Reduce Error: Java heap space</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/46027#M42534</link>
      <description>&lt;P&gt;Thanks!&amp;nbsp;mapred.child.java.opts in&amp;nbsp;&lt;SPAN&gt;mapred-site.xml solved the issue.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 07 Oct 2016 06:05:54 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/46027#M42534</guid>
      <dc:creator>Triffids</dc:creator>
      <dc:date>2016-10-07T06:05:54Z</dc:date>
    </item>
    <item>
      <title>Re: Map and Reduce Error: Java heap space</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/46029#M42535</link>
      <description>Sounds good mate</description>
      <pubDate>Fri, 07 Oct 2016 06:37:24 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/46029#M42535</guid>
      <dc:creator>csguna</dc:creator>
      <dc:date>2016-10-07T06:37:24Z</dc:date>
    </item>
    <item>
      <title>Re: Map and Reduce Error: Java heap space</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/47023#M42536</link>
      <description>&lt;P&gt;If you need more details, please refer below.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;mapred.child.java.opts is for Hadoop 1.x.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If you are using Hadoop 2.x, please use the parameters below instead:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;mapreduce.map.java.opts=-Xmx4g &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;&amp;nbsp;&lt;SPAN&gt;# Note: 4 GB&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;mapreduce.reduce.java.opts=-Xmx4g &amp;nbsp; &amp;nbsp; # Note: 4 GB&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Also, when you set java.opts, note two important points:&lt;/P&gt;&lt;P&gt;1. It depends on memory.mb, so always set java.opts to at most about 80% of memory.mb.&lt;/P&gt;&lt;P&gt;2. Use the "&lt;SPAN&gt;-&lt;/SPAN&gt;&lt;SPAN&gt;Xmx4g" format for java.opts, but a plain numerical value for memory.mb.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;mapreduce.map.memory.mb = 5120 &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;&lt;SPAN&gt;# &amp;nbsp;Note: 5 GB&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;mapreduce.reduce.memory.mb = &lt;SPAN&gt;5120 &amp;nbsp; &amp;nbsp;# Note: 5 GB&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Finally,&amp;nbsp;some organizations will not allow you to alter mapred-site.xml directly or via CM. Also, this kind of setup is only needed to handle very big tables, so it is not recommended to alter the cluster configuration for just a few tables. Instead, you can apply the settings temporarily by following the steps below:&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;1. From the shell:&lt;/P&gt;&lt;P&gt;HDFS&amp;gt; export HIVE_OPTS="-hiveconf mapreduce.map.memory.mb=5120 -hiveconf mapreduce.reduce.memory.mb=5120 -hiveconf mapreduce.map.java.opts=-Xmx4g -hiveconf mapreduce.reduce.java.opts=-Xmx4g"&lt;/P&gt;&lt;P&gt;2. From Hive:&lt;/P&gt;&lt;P&gt;hive&amp;gt; set&amp;nbsp;&lt;SPAN&gt;mapreduce.map.memory.mb=5120;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;hive&amp;gt; set&amp;nbsp;&lt;SPAN&gt;mapreduce.reduce.memory.mb=5120;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;hive&amp;gt; set&amp;nbsp;mapreduce.map.java.opts=-Xmx4g;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;hive&amp;gt; set&amp;nbsp;mapreduce.reduce.java.opts=-Xmx4g;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Note: HIVE_OPTS applies only to Hive; if you need a similar setup for Hadoop, use HADOOP_OPTS.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Thanks&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Kumar&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 04 Nov 2016 01:20:50 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/47023#M42536</guid>
      <dc:creator>saranvisa</dc:creator>
      <dc:date>2016-11-04T01:20:50Z</dc:date>
    </item>
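    <!-- Editor's note: the Hadoop 2.x guidance above, collected into a single mapred-site.xml sketch. The values are the sample ones from the post (java.opts held at roughly 80% of memory.mb); tune them for your cluster. -->

```xml
<!-- Illustrative values only: JVM heap (-Xmx) kept at ~80% of the YARN container size -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>5120</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx4g</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>5120</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx4g</value>
</property>
```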
    <item>
      <title>Re: Map and Reduce Error: Java heap space</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/53848#M42537</link>
      <description>I would like to know the location of this file, because I found many mapred-site files. Thanks again!</description>
      <pubDate>Wed, 19 Apr 2017 13:01:54 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/53848#M42537</guid>
      <dc:creator>onsbt</dc:creator>
      <dc:date>2017-04-19T13:01:54Z</dc:date>
    </item>
    <item>
      <title>Re: Map and Reduce Error: Java heap space</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/53853#M42538</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/21527"&gt;@onsbt&lt;/a&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;In general,&amp;nbsp;the path is&amp;nbsp;/etc/hadoop/conf.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;But I would recommend not updating this file directly; instead, update it via Cloudera Manager -&amp;gt; YARN -&amp;gt; Configuration. If you&amp;nbsp;are not using CM, ask your admin.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Another recommendation: you can set those values 'temporarily' &amp;amp; directly in HDFS/Hive and test&amp;nbsp;to find the suitable value for your environment before you make the permanent change in the configuration file.&lt;/P&gt;</description>
      <pubDate>Wed, 19 Apr 2017 13:51:34 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/53853#M42538</guid>
      <dc:creator>saranvisa</dc:creator>
      <dc:date>2017-04-19T13:51:34Z</dc:date>
    </item>
    <item>
      <title>Re: Map and Reduce Error: Java heap space</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/53855#M42539</link>
      <description>&lt;P&gt;Thanks for replying. Actually, I want to decrease the heap memory size of HDFS and Kafka; do you have any suggestions? I modified the /opt/cloudera/parcels/KAFKA-2.1.1-1.2.1.1.p0.18/lib/kafka/bin/kafka-run-class.sh file, but that didn't give me any result.&lt;/P&gt;&lt;P&gt;Any help, please?&lt;/P&gt;</description>
      <pubDate>Wed, 19 Apr 2017 13:59:03 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/53855#M42539</guid>
      <dc:creator>onsbt</dc:creator>
      <dc:date>2017-04-19T13:59:03Z</dc:date>
    </item>
    <item>
      <title>Re: Map and Reduce Error: Java heap space</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/53856#M42540</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/21527"&gt;@onsbt&lt;/a&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;In general, a service restart is required after any configuration change.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Again, as I mentioned, it is recommended to make any configuration change via CM.&lt;/P&gt;</description>
      <pubDate>Wed, 19 Apr 2017 14:02:26 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/53856#M42540</guid>
      <dc:creator>saranvisa</dc:creator>
      <dc:date>2017-04-19T14:02:26Z</dc:date>
    </item>
    <item>
      <title>Re: Map and Reduce Error: Java heap space</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/53913#M42541</link>
      <description>Thanks for replying, the problem was solved. I would like to ask another question, because after researching I didn't find the solution: after installing Cloudera Manager I get a health problem with HDFS, "Problèmes d'état d'intégrité&lt;BR /&gt;HDFS&lt;BR /&gt;Blocs sous-répliqués" (health issues: HDFS under-replicated blocks). Do you have an idea about the solution?</description>
      <pubDate>Thu, 20 Apr 2017 09:41:34 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/53913#M42541</guid>
      <dc:creator>onsbt</dc:creator>
      <dc:date>2017-04-20T09:41:34Z</dc:date>
    </item>
    <item>
      <title>Re: Map and Reduce Error: Java heap space</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/53926#M42542</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/21527"&gt;@onsbt&lt;/a&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Can you translate your issue into English? Also, if it is not related to Java heap space, I would recommend creating a new thread instead, so that it is easy to track and for others to contribute as well.&lt;/P&gt;</description>
      <pubDate>Thu, 20 Apr 2017 13:47:47 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/53926#M42542</guid>
      <dc:creator>saranvisa</dc:creator>
      <dc:date>2017-04-20T13:47:47Z</dc:date>
    </item>
    <item>
      <title>Re: Map and Reduce Error: Java heap space</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/53976#M42543</link>
      <description>&lt;P&gt;&lt;SPAN&gt;Thanks for replying.&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;I updated this file directly instead of via Cloudera Manager, and my problem is resolved now &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt; thank you so much. But I have another question: I am running Cloudera&amp;nbsp;with the default configuration on a one-node cluster, and I would like to find where HDFS stores files locally. I created a file in HDFS with Hue, but when I look at /dfs/nn it is empty; I can't find the file that I already created.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 21 Apr 2017 07:29:01 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/53976#M42543</guid>
      <dc:creator>onsbt</dc:creator>
      <dc:date>2017-04-21T07:29:01Z</dc:date>
    </item>
    <item>
      <title>Re: Map and Reduce Error: Java heap space</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/53991#M42544</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/21527"&gt;@onsbt&lt;/a&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The default path is&amp;nbsp;&lt;SPAN&gt;/opt/hadoop/dfs/nn&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;You can confirm this via Cloudera Manager -&amp;gt; HDFS -&amp;gt; Configuration -&amp;gt;&amp;nbsp;&lt;SPAN&gt;search for "dfs.namenode.name.dir"&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 21 Apr 2017 14:14:04 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/53991#M42544</guid>
      <dc:creator>saranvisa</dc:creator>
      <dc:date>2017-04-21T14:14:04Z</dc:date>
    </item>
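    <!-- Editor's note: for reference, a sketch of where the property above lives. The path shown is illustrative (taken from the post); confirm the real value under dfs.namenode.name.dir in CM as described above. Note that dfs.namenode.name.dir holds only NameNode metadata (fsimage/edits), not file contents; file blocks are stored on DataNodes under dfs.datanode.data.dir, which may be why the created file is not visible under the NameNode directory. -->

```xml
<!-- hdfs-site.xml: local directory where the NameNode keeps its metadata
     (fsimage and edit logs). File block data lives elsewhere, under
     dfs.datanode.data.dir on each DataNode. Path below is illustrative. -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///opt/hadoop/dfs/nn</value>
</property>
```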
    <item>
      <title>Re: Map and Reduce Error: Java heap space</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/53993#M42545</link>
      <description>&lt;P&gt;The path /opt/hadoop/dfs/nn does not exist,&lt;BR /&gt;and when I look for the file that I already created, I can't find it at that path.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 21 Apr 2017 14:24:53 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/53993#M42545</guid>
      <dc:creator>onsbt</dc:creator>
      <dc:date>2017-04-21T14:24:53Z</dc:date>
    </item>
    <item>
      <title>Re: Map and Reduce Error: Java heap space</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/53994#M42546</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/21527"&gt;@onsbt&lt;/a&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;As mentioned already, please create a new topic for a new issue, as it may mislead others.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Also, please check the full&amp;nbsp;answer&amp;nbsp;and reply, so that you will get the desired answer.&lt;/P&gt;</description>
      <pubDate>Fri, 21 Apr 2017 14:28:24 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/53994#M42546</guid>
      <dc:creator>saranvisa</dc:creator>
      <dc:date>2017-04-21T14:28:24Z</dc:date>
    </item>
    <item>
      <title>Re: Map and Reduce Error: Java heap space</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/60328#M42547</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/18441"&gt;@saranvisa&lt;/a&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The last reducer of my mapreduce job fails with the below error.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;2017-09-20 16:23:23,732 FATAL [main] org.apache.hadoop.mapred.YarnChild: Error running child : java.lang.OutOfMemoryError: GC overhead limit exceeded
	at java.util.regex.Matcher.&amp;lt;init&amp;gt;(Matcher.java:224)
	at java.util.regex.Pattern.matcher(Pattern.java:1088)
	at java.lang.String.replaceAll(String.java:2162)
	at com.sas.ci.acs.extract.CXAService$myReduce.parseEvent(CXAService.java:1612)
	at com.sas.ci.acs.extract.CXAService$myReduce.reduce(CXAService.java:919)
	at com.sas.ci.acs.extract.CXAService$myReduce.reduce(CXAService.java:237)
	at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:444)
	at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

2017-09-20 16:23:23,834 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping ReduceTask metrics system...
2017-09-20 16:23:23,834 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: ReduceTask metrics system stopped.
2017-09-20 16:23:23,834 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: ReduceTask metrics system shutdown complete.&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Current settings:&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;TABLE&gt;&lt;TBODY&gt;&lt;TR&gt;&lt;TD&gt;mapreduce.map.java.opts&lt;/TD&gt;&lt;TD&gt;-Djava.net.preferIPv4Stack=true -Xmx3865051136&lt;/TD&gt;&lt;/TR&gt;&lt;/TBODY&gt;&lt;/TABLE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;TABLE&gt;&lt;TBODY&gt;&lt;TR&gt;&lt;TD&gt;mapreduce.reduce.java.opts&lt;/TD&gt;&lt;TD&gt;-Djava.net.preferIPv4Stack=true -Xmx6144067296&lt;/TD&gt;&lt;/TR&gt;&lt;/TBODY&gt;&lt;/TABLE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;1) Do you recommend increasing the following properties to the values below?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;"mapreduce.map.java.opts","-Xmx4g"&amp;nbsp;&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;"mapreduce.reduce.java.opts","-Xmx8g"&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;2) These are my current map and reduce memory settings. Do I also need to bump up my reduce memory to 10240m?&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;mapreduce.map.memory.mb 8192&lt;BR /&gt;mapreduce.reduce.memory.mb 8192&lt;/SPAN&gt;&lt;/P&gt;
      <pubDate>Tue, 26 Sep 2017 13:42:45 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/60328#M42547</guid>
      <dc:creator>desind</dc:creator>
      <dc:date>2017-09-26T13:42:45Z</dc:date>
    </item>
    <item>
      <title>Re: Map and Reduce Error: Java heap space</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/60339#M42548</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/21997"&gt;@desind&lt;/a&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I would not recommend&amp;nbsp;changing your cluster settings; instead, you can pass the memory &amp;amp; Java opts when you execute your jar.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Ex: below are some sample values; you can change them as needed.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;hadoop jar ${JAR_PATH} ${CONFIG_PATH}/filename.xml ${ENV} ${ODATE} mapMem=12288 mapJavaOpts=Xmx9830 redurMem=12288 redurJavaOpts=Xmx9830&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Note:&lt;/P&gt;&lt;P&gt;mapJavaOpts = mapMem * 0.8&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;redurJavaOpts =&amp;nbsp;redurMem * 0.8&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 26 Sep 2017 19:07:15 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/60339#M42548</guid>
      <dc:creator>saranvisa</dc:creator>
      <dc:date>2017-09-26T19:07:15Z</dc:date>
    </item>
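    <!-- Editor's note: the ~80% rule of thumb above, sketched in shell arithmetic. The values are the sample ones from the post; mapMem/redurMem and the *JavaOpts names are parameters of the poster's own jar, not standard Hadoop flags. -->

```shell
# Derive task JVM heap (-Xmx, in MB) as ~80% of the YARN container size,
# per the rule of thumb in the reply above. Sample values from the post.
MAP_MEM=12288
REDUR_MEM=12288
MAP_JAVA_OPTS=$(( MAP_MEM * 8 / 10 ))      # 12288 * 0.8 = 9830
REDUR_JAVA_OPTS=$(( REDUR_MEM * 8 / 10 ))  # 12288 * 0.8 = 9830
echo "mapMem=${MAP_MEM} mapJavaOpts=Xmx${MAP_JAVA_OPTS}"
echo "redurMem=${REDUR_MEM} redurJavaOpts=Xmx${REDUR_JAVA_OPTS}"
```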
    <item>
      <title>Re: Map and Reduce Error: Java heap space</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/60678#M42549</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/18441"&gt;@saranvisa&lt;/a&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN class="login-bold"&gt;What are the implications of increasing mapreduce.map/reduce.memory.mb and mapreduce.reduce.java.opts to a higher value in the cluster itself?&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN class="login-bold"&gt;One of them would be that jobs that do not need this additional memory will still get it, which is of no use.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN class="login-bold"&gt;Other jobs during that time may&amp;nbsp;be impacted.&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Anything else?&lt;/P&gt;</description>
      <pubDate>Thu, 05 Oct 2017 19:31:31 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/60678#M42549</guid>
      <dc:creator>desind</dc:creator>
      <dc:date>2017-10-05T19:31:31Z</dc:date>
    </item>
    <item>
      <title>Re: Map and Reduce Error: Java heap space</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/60702#M42550</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/21997"&gt;@desind&lt;/a&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;To add to your point, the cluster-level setup applies to every MapReduce job, and the extra memory pressure&amp;nbsp;may also impact non-MapReduce&amp;nbsp;jobs.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;In fact, I am not against&amp;nbsp;setting higher values in the cluster itself, but&amp;nbsp;you should do that based on how many jobs require the higher values, performance, etc.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 06 Oct 2017 16:11:21 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Map-and-Reduce-Error-Java-heap-space/m-p/60702#M42550</guid>
      <dc:creator>saranvisa</dc:creator>
      <dc:date>2017-10-06T16:11:21Z</dc:date>
    </item>
  </channel>
</rss>

