
org.apache.hadoop.mapred.YarnChild: Error running child : java.lang.OutOfMemoryError: GC overhead limit exceeded

Contributor

2017-03-30 14:12:34,329 FATAL [main] org.apache.hadoop.mapred.YarnChild: Error running child : java.lang.OutOfMemoryError: GC overhead limit exceeded
	at org.apache.xerces.dom.DeferredDocumentImpl.getNodeObject(Unknown Source)
	at org.apache.xerces.dom.DeferredDocumentImpl.synchronizeChildren(Unknown Source)
	at org.apache.xerces.dom.DeferredElementImpl.synchronizeChildren(Unknown Source)
	at org.apache.xerces.dom.ElementImpl.normalize(Unknown Source)
	at org.apache.xerces.dom.ElementImpl.normalize(Unknown Source)
	at com.mbrdi.xdl.powertrain.MR_XDLogFileAnalysis_ProcessingLogFiles_mapper.map(MR_XDLogFileAnalysis_ProcessingLogFiles_mapper.java:249)
	at com.mbrdi.xdl.powertrain.MR_XDLogFileAnalysis_ProcessingLogFiles_mapper.map(MR_XDLogFileAnalysis_ProcessingLogFiles_mapper.java:46)
	at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)

I was getting this error on one of the clusters while trying to read an XML file in a MapReduce job. So I set the properties below in the driver code.

config.set("mapreduce.map.memory.mb","10240"); config.set("mapreduce.map.java.opts","-Xmx8192m"); config.set("mapreduce.reduce.memory.mb","10240"); config.set("mapreduce.reduce.java.opts","-Xmx8192m"); config.set("mapreduce.task.io.sort.mb","1792"); config.set("yarn.scheduler.minimum-allocation-mb","10240"); config.set("yarn.scheduler.maximum-allocation-mb","184320"); config.set("yarn.nodemanager.resource.memory-mb","184320"); config.set("yarn.app.mapreduce.am.resource.mb","10240"); config.set("yarn.app.mapreduce.am.command-opts","-Xmx8192m");

and the code worked.
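One note on the list above: the mapreduce.* and yarn.app.mapreduce.am.* properties are per-job, but yarn.scheduler.* and yarn.nodemanager.* are read by the ResourceManager and NodeManagers from yarn-site.xml, so setting them with config.set() in the driver has no effect. As a minimal sketch (the XdlDriver class name is hypothetical), implementing Tool lets the same per-job settings be passed as -D options at submit time instead of being hard-coded:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class XdlDriver extends Configured implements Tool {
    @Override
    public int run(String[] args) throws Exception {
        // getConf() already contains any -Dmapreduce.*=... overrides parsed
        // by ToolRunner/GenericOptionsParser from the command line.
        Configuration conf = getConf();
        Job job = Job.getInstance(conf, "XDL log file analysis");
        job.setJarByClass(XdlDriver.class);
        // ... set mapper class and input/output paths as in the original driver ...
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // e.g. hadoop jar xdl.jar XdlDriver -Dmapreduce.map.java.opts=-Xmx8192m <in> <out>
        System.exit(ToolRunner.run(new XdlDriver(), args));
    }
}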

On another cluster, I am not able to fix the error. Is there any property I am missing?
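That said, increasing the heap mostly treats the symptom. The stack trace shows Xerces materializing a deferred DOM tree (DeferredDocumentImpl.synchronizeChildren) when normalize() is called in the mapper, so memory use grows with the size of the XML document, and a large enough input will exhaust any heap. A streaming parser keeps memory flat regardless of document size. Below is a minimal StAX sketch; since the mapper code at line 249 is not shown, the "record" element name and the value handling are hypothetical placeholders:

import java.io.StringReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class StreamingXmlSketch {
    public static void parse(String xml) throws Exception {
        XMLInputFactory factory = XMLInputFactory.newInstance();
        XMLStreamReader reader = factory.createXMLStreamReader(new StringReader(xml));
        try {
            while (reader.hasNext()) {
                // Visit elements one at a time instead of building a DOM tree.
                if (reader.next() == XMLStreamConstants.START_ELEMENT
                        && "record".equals(reader.getLocalName())) {
                    String value = reader.getElementText(); // hypothetical extraction
                    System.out.println(value); // a real mapper would emit a key/value here
                }
            }
        } finally {
            reader.close();
        }
    }
}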

2 REPLIES

Guru

@pooja khandelwal, what is the mapred.child.java.opts property set to? Can you please try increasing this value?
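On MRv2, mapred.child.java.opts is only used as a fallback when mapreduce.map.java.opts / mapreduce.reduce.java.opts are unset, so it is worth confirming what the client actually resolves. A small sketch for dumping the effective values (it assumes the cluster's *-site.xml files are on the classpath):

import org.apache.hadoop.conf.Configuration;

public class PrintTaskOpts {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Explicitly adding mapred-site.xml is a defensive assumption for a
        // bare client; normally the MapReduce classes register it themselves.
        conf.addResource("mapred-site.xml");
        String[] keys = {
            "mapreduce.map.java.opts",
            "mapreduce.reduce.java.opts",
            "mapreduce.map.memory.mb",
            "mapreduce.reduce.memory.mb",
            "mapred.child.java.opts"
        };
        for (String key : keys) {
            System.out.println(key + " = " + conf.get(key));
        }
    }
}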

Contributor

mapred.child.java.opts seems to be deprecated. Below are the values from the cluster and the ones used in the driver code.

In Code :

=======================

config.set("mapreduce.map.java.opts","-Xmx8192m")

config.set("mapreduce.reduce.java.opts","-Xmx8192m");

In Cluster :

==================================

<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx26214m</value>
</property>

<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx13107m</value>
</property>
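Two things may be worth checking when comparing these values. First, if the cluster's mapred-site.xml marks the properties as <final>true</final>, the config.set() calls in the driver are silently ignored, which could explain why the same code behaves differently on this cluster. Second, each -Xmx has to fit inside its container size (mapreduce.map.memory.mb / mapreduce.reduce.memory.mb), since the NodeManager kills containers that exceed their allocation; roughly 80% of the container is a common rule of thumb. A small sanity-check sketch (the 0.8 ratio and the helper itself are assumptions, not something from this thread):

import org.apache.hadoop.conf.Configuration;

public class HeapVsContainerCheck {
    // e.g. check(conf, "mapreduce.map.memory.mb", "mapreduce.map.java.opts")
    public static void check(Configuration conf, String memKey, String optsKey) {
        long containerMb = conf.getLong(memKey, 1024);
        String opts = conf.get(optsKey, "");
        long heapMb = 0;
        // Crude -Xmx parse: handles values like "-Xmx8192m" only, not "g"/"k" suffixes.
        for (String tok : opts.split("\\s+")) {
            if (tok.startsWith("-Xmx") && tok.endsWith("m")) {
                heapMb = Long.parseLong(tok.substring(4, tok.length() - 1));
            }
        }
        if (heapMb > containerMb * 0.8) {
            System.out.println(optsKey + " (" + heapMb + "m) leaves little headroom in "
                    + memKey + " (" + containerMb + "m); expect container kills or GC pressure");
        }
    }
}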