MapReduce - GC overhead limit exceeded
- Labels: Apache Hadoop
Created 06-19-2016 02:12 PM
Hi,
When we run a MapReduce job, we're getting a "GC overhead limit exceeded" error during the map phase and the job gets terminated. Please let us know how this can be resolved.
Error: GC overhead limit exceeded
16/06/19 17:34:39 INFO mapreduce.Job: map 18% reduce 0%
16/06/19 17:36:42 INFO mapreduce.Job: map 19% reduce 0%
16/06/19 17:37:18 INFO mapreduce.Job: Task Id : attempt_1466342436828_0001_m_000008_2, Status : FAILED
Error: Java heap space
Regards,
Venkadesh S
Created 06-19-2016 02:22 PM
It looks like your mapred.child.java.opts value is insufficient for this job; try running it again after increasing mapred.child.java.opts.
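A minimal sketch of one way to do that from the job driver, assuming a Hadoop 2.x / YARN cluster where mapreduce.map.java.opts is the effective per-map setting (mapred.child.java.opts is the older, deprecated name); the sizes below are illustrative, not recommendations:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class HeapConfigSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Container size YARN allocates to each map task, in MB (illustrative value).
        conf.set("mapreduce.map.memory.mb", "4096");
        // JVM heap for each map task; keep it well below the container size.
        conf.set("mapreduce.map.java.opts", "-Xmx3276m");
        // Older clusters may still honour the deprecated name.
        conf.set("mapred.child.java.opts", "-Xmx3276m");

        Job job = Job.getInstance(conf, "csv-import");
        // ... set mapper, input format, paths, then job.waitForCompletion(true)
    }
}
```

The same properties can usually be passed at submit time with -D flags (for example, hadoop jar myjob.jar -Dmapreduce.map.java.opts=-Xmx3276m ...) if the driver goes through ToolRunner/GenericOptionsParser.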
Created 06-19-2016 02:32 PM
Thanks @Rajkumar Singh, @Benjamin Leonhardi.
Below are my settings in the cluster.
Map Memory : 8192
Sort Allocation Memory : 2047
MR Map Java Heap Size : -Xmx8192m
mapreduce.admin.map.child.java.opts & mapred.child.java.opts : -server -XX:NewRatio=8 -Djava.net.preferIPv4Stack=true -Dhdp.version=${hdp.version}
I haven't found mapred.child.java.opts in Ambari.
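If the property does not show up in the Ambari UI, one quick way to see what a submitted job would actually use is to print the resolved values from the client-side Configuration; a small sketch (the class name is hypothetical):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class ShowEffectiveOpts {
    public static void main(String[] args) throws Exception {
        // Loads *-site.xml from the client's classpath, as a submitted job would.
        Job job = Job.getInstance(new Configuration(), "show-effective-opts");
        Configuration conf = job.getConfiguration();

        System.out.println("mapreduce.map.memory.mb   = " + conf.get("mapreduce.map.memory.mb"));
        System.out.println("mapreduce.map.java.opts   = " + conf.get("mapreduce.map.java.opts"));
        System.out.println("mapred.child.java.opts    = " + conf.get("mapred.child.java.opts"));
        System.out.println("mapreduce.task.io.sort.mb = " + conf.get("mapreduce.task.io.sort.mb"));
    }
}
```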
Created 06-19-2016 03:02 PM
Normally mappers don't fail with OOM, and 8192M is pretty good. I suspect you have some big records while reading from the CSV. Are you doing some memory-intensive operation inside the mapper? Could you please share the task log for attempt attempt_1466342436828_0001_m_000008_2?
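For illustration, a hedged sketch of the kind of memory-intensive mapper that typically produces "GC overhead limit exceeded": it buffers every parsed row for the whole split instead of emitting as it reads. The class and field names here are hypothetical, not taken from the actual job:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Anti-pattern: every input row is kept in memory until cleanup(), so the
// heap fills up and the JVM ends up spending most of its time in GC.
public class BufferingCsvMapper extends Mapper<LongWritable, Text, Text, Text> {
    private final List<String> rows = new ArrayList<>();

    @Override
    protected void map(LongWritable key, Text value, Context context) {
        rows.add(value.toString());   // grows without bound across the split
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        for (String row : rows) {
            int comma = row.indexOf(',');
            context.write(new Text(comma >= 0 ? row.substring(0, comma) : row), new Text(row));
        }
    }
}
```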
Created 06-19-2016 02:23 PM
Sounds more like your map task is not very efficient. What are you doing in it? The second thing I could see is the sort memory being too small, but I would mostly look at your map code.
http://stackoverflow.com/questions/5839359/java-lang-outofmemoryerror-gc-overhead-limit-exceeded
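Along the same lines, a minimal sketch of a leaner map pattern: write each record out as soon as it is parsed and reuse the Writable instances instead of allocating new ones per record (names are illustrative):

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Streaming mapper: no buffering, roughly constant memory per record.
public class StreamingCsvMapper extends Mapper<LongWritable, Text, Text, Text> {
    private final Text outKey = new Text();
    private final Text outValue = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String row = value.toString();
        int comma = row.indexOf(',');
        // Key on the first CSV column (or the whole row if there is none).
        outKey.set(comma >= 0 ? row.substring(0, comma) : row);
        outValue.set(row);
        context.write(outKey, outValue);
    }
}
```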
Created 06-19-2016 02:39 PM
@Benjamin Leonhardi I am trying to read a CSV file of total size around 50 GB. Around 310 splits get created, but I have only 3 maps in running status at a time even though I have four datanodes. Each datanode has 16 GB RAM, one disk, and 2 CPU cores. I am using CSVNLineInputFormat from https://github.com/mvallebr/CSVInputFormat/blob/master/src/main/java/org/apache/hadoop/mapreduce/lib/input/CSVNLineInputFormat.java to read my CSV files.
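As a rough back-of-the-envelope on the map concurrency (an aside based only on the numbers already in this thread; the cluster's actual yarn.nodemanager.resource.memory-mb is not shown): with mapreduce.map.memory.mb at 8192 and only part of each 16 GB node offered to the NodeManager for containers, each datanode can host roughly one or two 8 GB map containers at a time, and the ApplicationMaster occupies a container as well. Seeing only about 3 concurrent maps across four nodes would therefore be consistent with container sizing rather than with the number of splits (310).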
