MapReduce - GC overhead limit exceeded

Explorer

Hi,

When we run a MapReduce job, we are getting a "GC overhead limit exceeded" error during the map phase and the job gets terminated. Please let us know how this can be resolved.

Error: GC overhead limit exceeded
16/06/19 17:34:39 INFO mapreduce.Job: map 18% reduce 0%
16/06/19 17:36:42 INFO mapreduce.Job: map 19% reduce 0%
16/06/19 17:37:18 INFO mapreduce.Job: Task Id : attempt_1466342436828_0001_m_000008_2, Status : FAILED
Error: Java heap space

Regards,

Venkadesh S

1 ACCEPTED SOLUTION

Super Guru

It looks like your mapred.child.java.opts value is insufficient to run the job. Try running the job again after increasing the mapred.child.java.opts value.
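For example, it can be raised cluster-wide in mapred-site.xml, or per job with -D if your driver goes through ToolRunner. A sketch with an illustrative heap value (size it to your data and nodes):

    <property>
      <name>mapred.child.java.opts</name>
      <value>-Xmx4096m</value>  <!-- illustrative; pick a value your nodes can afford -->
    </property>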


5 REPLIES


Explorer

Thanks @Rajkumar Singh, @Benjamin Leonhardi.

Below are my settings in the cluster.

Map Memory: 8192 MB

Sort Allocation Memory: 2047 MB

MR Map Java Heap Size: -Xmx8192m

mapreduce.admin.map.child.java.opts & mapred.child.java.opts: -server -XX:NewRatio=8 -Djava.net.preferIPv4Stack=true -Dhdp.version=${hdp.version}

I haven't found mapred.child.java.opts in Ambari.
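One thing I notice in these numbers: the -Xmx8192m heap is the full 8192 MB container size. The tuning guidance I have seen (e.g., in the HDP docs) keeps the map JVM heap at roughly 80% of mapreduce.map.memory.mb so the JVM overhead still fits inside the YARN container. A sketch with assumed values:

    <property>
      <name>mapreduce.map.memory.mb</name>
      <value>8192</value>
    </property>
    <property>
      <name>mapreduce.map.java.opts</name>
      <value>-Xmx6553m</value>  <!-- roughly 80% of the 8192 MB container -->
    </property>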

Super Guru

Normally mappers don't fail with OOM, and 8192M is pretty good. I suspect you have some big records while reading from the CSV, or you are doing some memory-intensive operation inside the mapper. Could you please share the task log for attempt attempt_1466342436828_0001_m_000008_2?
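If log aggregation is enabled, you can pull it with the YARN CLI; the application id is derived from the attempt id above:

    yarn logs -applicationId application_1466342436828_0001 > attempt.log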

Master Guru

Sounds more like your map task is not very efficient. What are you doing in it? The second thing I could see is the sort memory being too small. But I would mostly look at your map code.

http://stackoverflow.com/questions/5839359/java-lang-outofmemoryerror-gc-overhead-limit-exceeded
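To illustrate the kind of map-side inefficiency meant here: a frequent cause of "GC overhead limit exceeded" in a mapper is allocating fresh objects for every record, or buffering records in a member collection. A minimal sketch of the reuse pattern (class and field names are hypothetical, and a plain TextInputFormat-style LongWritable key is assumed):

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class CsvFieldMapper extends Mapper<LongWritable, Text, Text, Text> {

      // Reused across all map() calls; doing "new Text(...)" per record is a
      // common source of GC pressure on large inputs.
      private final Text outKey = new Text();
      private final Text outValue = new Text();

      @Override
      protected void map(LongWritable key, Text value, Context context)
          throws IOException, InterruptedException {
        String[] fields = value.toString().split(",", -1);
        if (fields.length < 2) {
          return; // skip malformed records
        }
        outKey.set(fields[0]);
        outValue.set(fields[1]);
        context.write(outKey, outValue);
        // Avoid collecting records into a member List/Map here; buffering a
        // whole split in memory is the other classic cause of map-side OOM.
      }
    }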

Explorer

@Benjamin Leonhardi I am trying to read a CSV file with a total size of around 50 GB. Around 310 splits get created, but I have only 3 maps in running status at a time even though I have four datanodes. Each datanode has 16 GB RAM, one disk, and 2 CPU cores. I am using CSVNLineInputFormat from (https://github.com/mvallebr/CSVInputFormat/blob/master/src/main/java/org/apache/hadoop/mapreduce/lib/input/CSVNLineInputFormat.java) to read my CSV files.
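Could the 3 running maps simply be the container math? A rough calculation, assuming the stock default of yarn.nodemanager.resource.memory-mb = 8192 per node and the 8192 MB map containers above:

    containers per node = floor(8192 MB / 8192 MB) = 1
    cluster-wide        = 4 nodes x 1              = 4 containers
    minus 1 container for the MR ApplicationMaster = 3 concurrent map tasks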