
MapReduce application failed with OutOfMemoryError

Contributor

Hello! Since my cluster started running more Spark and MapReduce jobs than before, a resource-intensive MapReduce application has been crashing randomly.
The error is always the same (an OutOfMemoryError).
My cluster nodes have 128 GB of RAM and 32 cores each; Kerberos and HA are enabled for both HDFS and YARN (we use YARN rather than MapReduce v1).

App details
- Total maps: 18922
- Total reducers: 983
- Approx. allocated CPU vcores: 283
- Approx. allocated memory MB: 3467264

Below I have pasted:

- The log from my History Server.
- The memory-related properties from my yarn-site.xml, mapred-site.xml and hadoop-env.sh.
- Per-node Java information.

-----------------------------------------------------------------------------------------------------

History Server log (from one YARN Node Manager where the app failed)

Log Type: stderr
Log Upload Time: Sun Jun 11 13:49:24 +0000 2017
Log Length: 25992448
Showing 4096 bytes of 25992448 total.
v2.runtime.XMLSerializer.leafElement(XMLSerializer.java:327)
at com.sun.xml.bind.v2.model.impl.RuntimeBuiltinLeafInfoImpl$StringImplImpl.writeLeafElement(RuntimeBuiltinLeafInfoImpl.java:1045)
at com.sun.xml.bind.v2.model.impl.RuntimeBuiltinLeafInfoImpl$StringImplImpl.writeLeafElement(RuntimeBuiltinLeafInfoImpl.java:1024)
at com.sun.xml.bind.v2.model.impl.RuntimeEnumLeafInfoImpl.writeLeafElement(RuntimeEnumLeafInfoImpl.java:169)
at com.sun.xml.bind.v2.model.impl.RuntimeEnumLeafInfoImpl.writeLeafElement(RuntimeEnumLeafInfoImpl.java:69)
at com.sun.xml.bind.v2.runtime.reflect.TransducedAccessor$CompositeTransducedAccessorImpl.writeLeafElement(TransducedAccessor.java:256)
at com.sun.xml.bind.v2.runtime.property.SingleElementLeafProperty.serializeBody(SingleElementLeafProperty.java:128)
at com.sun.xml.bind.v2.runtime.ClassBeanInfoImpl.serializeBody(ClassBeanInfoImpl.java:344)
at com.sun.xml.bind.v2.runtime.XMLSerializer.childAsXsiType(XMLSerializer.java:700)
at com.sun.xml.bind.v2.runtime.property.ArrayElementNodeProperty.serializeItem(ArrayElementNodeProperty.java:69)
at com.sun.xml.bind.v2.runtime.property.ArrayElementProperty.serializeListBody(ArrayElementProperty.java:172)
at com.sun.xml.bind.v2.runtime.property.ArrayERProperty.serializeBody(ArrayERProperty.java:159)
at com.sun.xml.bind.v2.runtime.ClassBeanInfoImpl.serializeBody(ClassBeanInfoImpl.java:344)
at com.sun.xml.bind.v2.runtime.XMLSerializer.childAsSoleContent(XMLSerializer.java:597)
at com.sun.xml.bind.v2.runtime.ClassBeanInfoImpl.serializeRoot(ClassBeanInfoImpl.java:328)
at com.sun.xml.bind.v2.runtime.XMLSerializer.childAsRoot(XMLSerializer.java:498)
at com.sun.xml.bind.v2.runtime.MarshallerImpl.write(MarshallerImpl.java:320)
... 39 more
Caused by: org.mortbay.jetty.EofException
at org.mortbay.jetty.HttpGenerator.flush(HttpGenerator.java:791)
at org.mortbay.jetty.AbstractGenerator$Output.flush(AbstractGenerator.java:569)
at org.mortbay.jetty.HttpConnection$Output.flush(HttpConnection.java:1012)
at org.mortbay.jetty.AbstractGenerator$Output.write(AbstractGenerator.java:651)
at org.mortbay.jetty.AbstractGenerator$Output.write(AbstractGenerator.java:580)
at com.sun.jersey.spi.container.servlet.WebComponent$Writer.write(WebComponent.java:307)
at java.util.zip.DeflaterOutputStream.deflate(DeflaterOutputStream.java:253)
at java.util.zip.DeflaterOutputStream.write(DeflaterOutputStream.java:211)
at java.util.zip.GZIPOutputStream.write(GZIPOutputStream.java:146)
at com.sun.jersey.spi.container.ContainerResponse$CommittingOutputStream.write(ContainerResponse.java:134)
at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:282)
at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:125)
at java.io.OutputStreamWriter.write(OutputStreamWriter.java:207)
at org.codehaus.jackson.impl.WriterBasedGenerator._flushBuffer(WriterBasedGenerator.java:1812)
at org.codehaus.jackson.impl.WriterBasedGenerator._writeString(WriterBasedGenerator.java:987)
at org.codehaus.jackson.impl.WriterBasedGenerator.writeString(WriterBasedGenerator.java:448)
at com.sun.jersey.json.impl.writer.JacksonStringMergingGenerator.flushPreviousString(JacksonStringMergingGenerator.java:311)
at com.sun.jersey.json.impl.writer.JacksonStringMergingGenerator.writeFieldName(JacksonStringMergingGenerator.java:139)
at com.sun.jersey.json.impl.writer.Stax2JacksonWriter.writeStartElement(Stax2JacksonWriter.java:183)
... 58 more
Caused by: java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:65)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:492)
at org.mortbay.io.nio.ChannelEndPoint.flush(ChannelEndPoint.java:171)
at org.mortbay.io.nio.SelectChannelEndPoint.flush(SelectChannelEndPoint.java:221)
at org.mortbay.jetty.HttpGenerator.flush(HttpGenerator.java:725)
... 77 more
Halting due to Out Of Memory Error...
Log Type: stdout
Log Upload Time: Sun Jun 11 13:49:24 +0000 2017
Log Length: 0
Log Type: syslog
Log Upload Time: Sun Jun 11 13:49:24 +0000 2017
Log Length: 24905123
Showing 4096 bytes of 24905123 total.
che.hadoop.mapred.TaskUmbilicalProtocol
2017-06-11 07:00:14,078 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for job_1496754929367_0270 (auth:TOKEN) for protocol=interface org.apache.hadoop.mapred.TaskUmbilicalProtocol
2017-06-11 07:00:14,079 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for job_1496754929367_0270 (auth:TOKEN) for protocol=interface org.apache.hadoop.mapred.TaskUmbilicalProtocol
2017-06-11 07:00:14,080 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for job_1496754929367_0270 (auth:TOKEN) for protocol=interface org.apache.hadoop.mapred.TaskUmbilicalProtocol
2017-06-11 07:00:14,080 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for job_1496754929367_0270 (auth:TOKEN) for protocol=interface org.apache.hadoop.mapred.TaskUmbilicalProtocol
2017-06-11 07:00:14,081 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for job_1496754929367_0270 (auth:SIMPLE)
2017-06-11 07:00:14,081 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for job_1496754929367_0270 (auth:TOKEN) for protocol=interface org.apache.hadoop.mapred.TaskUmbilicalProtocol
2017-06-11 07:00:14,082 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for job_1496754929367_0270 (auth:TOKEN) for protocol=interface org.apache.hadoop.mapred.TaskUmbilicalProtocol
2017-06-11 07:00:14,082 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for job_1496754929367_0270 (auth:TOKEN) for protocol=interface org.apache.hadoop.mapred.TaskUmbilicalProtocol
2017-06-11 07:00:14,082 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for job_1496754929367_0270 (auth:SIMPLE)
2017-06-11 07:00:14,083 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for job_1496754929367_0270 (auth:TOKEN) for protocol=interface org.apache.hadoop.mapred.TaskUmbilicalProtocol
2017-06-11 07:00:14,083 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for job_1496754929367_0270 (auth:TOKEN) for protocol=interface org.apache.hadoop.mapred.TaskUmbilicalProtocol
2017-06-11 07:00:14,974 FATAL org.apache.hadoop.yarn.YarnUncaughtExceptionHandler: Thread Thread[Socket Reader #1 for port 36503,5,main] threw an Error. Shutting down now...
java.lang.OutOfMemoryError: GC overhead limit exceeded
at com.google.protobuf.CodedInputStream.<init>(CodedInputStream.java:573)
at com.google.protobuf.CodedInputStream.newInstance(CodedInputStream.java:55)
at com.google.protobuf.AbstractMessageLite$Builder.mergeFrom(AbstractMessageLite.java:219)
at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:912)
at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:267)
at com.google.protobuf.AbstractMessageLite$Builder.mergeDelimitedFrom(AbstractMessageLite.java:290)
at com.google.protobuf.AbstractMessage$Builder.mergeDelimitedFrom(AbstractMessage.java:926)
at com.google.protobuf.AbstractMessageLite$Builder.mergeDelimitedFrom(AbstractMessageLite.java:296)
at com.google.protobuf.AbstractMessage$Builder.mergeDelimitedFrom(AbstractMessage.java:918)
at org.apache.hadoop.ipc.Server$Connection.decodeProtobufFromStream(Server.java:1994)
at org.apache.hadoop.ipc.Server$Connection.processOneRpc(Server.java:1774)
at org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:1548)
at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:774)
at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:647)
at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:618)
2017-06-11 07:00:14,980 INFO org.apache.hadoop.util.ExitUtil: Halt with status -1 Message: HaltException

-----------------------------------------------------------------------------------------------------

yarn-site.xml

 

<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>122635</value>
</property>
<property>
<name>yarn.nodemanager.resource.cpu-vcores</name>
<value>30</value>
</property>
<property>
<name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
<value>100.0</value>
</property>
<property>
<name>yarn.scheduler.maximum-allocation-mb</name>
<value>102400</value>
</property>
<property>
<name>mapreduce.map.memory.mb</name>
<value>6144</value>
</property>
<property>
<name>mapreduce.reduce.memory.mb</name>
<value>12288</value>
</property>
<property>
<name>local.child.java.opts</name>
<value>-server -Djava.net.preferIPv4Stack=true</value>
</property>
<property>
<name>local.map.task.jvm.heap.mb</name>
<value>4300</value>
</property>
<property>
<name>local.reduce.task.jvm.heap.mb</name>
<value>8601</value>
</property>
<property>
<name>mapreduce.map.java.opts</name>
<value>${local.child.java.opts} -Xmx${local.map.task.jvm.heap.mb}m</value>
</property>
<property>
<name>mapreduce.reduce.java.opts</name>
<value>${local.child.java.opts} -Xmx${local.reduce.task.jvm.heap.mb}m</value>
</property>

-----------------------------------------------------------------------------------------------------

mapred-site.xml

 

<property>
<name>local.child.java.opts</name>
<value>-server -Djava.net.preferIPv4Stack=true</value>
</property>
<property>
<name>local.map.task.jvm.heap.mb</name>
<value>4300</value>
</property>
<property>
<name>local.reduce.task.jvm.heap.mb</name>
<value>8601</value>
</property>
<property>
<name>mapred.map.child.java.opts</name>
<value>${local.child.java.opts} -Xmx${local.map.task.jvm.heap.mb}m</value>
</property>
<property>
<name>mapred.reduce.child.java.opts</name>
<value>${local.child.java.opts} -Xmx${local.reduce.task.jvm.heap.mb}m</value>
</property>
<property>
<name>mapred.job.map.memory.mb</name>
<value>6144</value>
</property>
<property>
<name>mapred.job.reduce.memory.mb</name>
<value>12288</value>
</property>
<property>
<name>mapred.cluster.max.map.memory.mb</name>
<value>15360</value>
</property>
<property>
<name>mapred.cluster.max.reduce.memory.mb</name>
<value>15360</value>
</property>
<property>
<name>mapreduce.job.reduce.slowstart.completedmaps</name>
<value>0.999</value>
</property>

-----------------------------------------------------------------------------------------------------

hadoop-env.sh

# Generic command-specific options
add_opt HADOOP_NAMENODE_OPTS "-XX:+UseParNewGC"
add_opt HADOOP_NAMENODE_OPTS "-XX:+UseConcMarkSweepGC"
add_opt HADOOP_NAMENODE_OPTS "-Xmx8192m" #"
add_opt HADOOP_NAMENODE_OPTS "-Dsyslog.tag=namenode"
add_opt HADOOP_NAMENODE_OPTS "-Ddaemon.logger=INFO,syslog"
add_opt HADOOP_NAMENODE_OPTS "${JMX_OPTS}=7191"
add_opt HADOOP_DATANODE_OPTS "-XX:+UseParNewGC"
add_opt HADOOP_DATANODE_OPTS "-XX:+UseConcMarkSweepGC"
add_opt HADOOP_DATANODE_OPTS "-Xmx4096m" #"
add_opt HADOOP_DATANODE_OPTS "-Dsyslog.tag=datanode"
add_opt HADOOP_DATANODE_OPTS "-Ddaemon.logger=INFO,syslog"
add_opt HADOOP_DATANODE_OPTS "${JMX_OPTS}=7194"
add_opt HADOOP_JOURNALNODE_OPTS "-XX:+UseParNewGC"
add_opt HADOOP_JOURNALNODE_OPTS "-XX:+UseConcMarkSweepGC"
add_opt HADOOP_JOURNALNODE_OPTS "-Xmx1024m" #"
add_opt HADOOP_JOURNALNODE_OPTS "-Dsyslog.tag=journalnode"
add_opt HADOOP_JOURNALNODE_OPTS "-Ddaemon.logger=INFO,syslog"
add_opt HADOOP_JOURNALNODE_OPTS "${JMX_OPTS}=7203"
add_opt HADOOP_JOB_HISTORYSERVER_OPTS "-XX:+UseParNewGC"
add_opt HADOOP_JOB_HISTORYSERVER_OPTS "-XX:+UseConcMarkSweepGC"
add_opt HADOOP_JOB_HISTORYSERVER_OPTS "-Xmx1024m" #"
add_opt HADOOP_JOB_HISTORYSERVER_OPTS "-Dsyslog.tag=historyserver"
add_opt HADOOP_JOB_HISTORYSERVER_OPTS "-Ddaemon.logger=INFO,syslog"
add_opt HADOOP_JOB_HISTORYSERVER_OPTS "${JMX_OPTS}=7201"

add_opt YARN_RESOURCEMANAGER_OPTS "-XX:+UseParNewGC"
add_opt YARN_RESOURCEMANAGER_OPTS "-XX:+UseConcMarkSweepGC"
add_opt YARN_RESOURCEMANAGER_OPTS "-Xmx1024m" #"
add_opt YARN_RESOURCEMANAGER_OPTS "-Dsyslog.tag=resourcemanager"
add_opt YARN_RESOURCEMANAGER_OPTS "-Ddaemon.logger=INFO,syslog"
add_opt YARN_RESOURCEMANAGER_OPTS "${JMX_OPTS}=7204"
add_opt YARN_PROXYSERVER_OPTS "-XX:+UseParNewGC"
add_opt YARN_PROXYSERVER_OPTS "-XX:+UseConcMarkSweepGC"
add_opt YARN_PROXYSERVER_OPTS "-Xmx1024m" #"
add_opt YARN_PROXYSERVER_OPTS "-Dsyslog.tag=proxyserver"
add_opt YARN_PROXYSERVER_OPTS "-Ddaemon.logger=INFO,syslog"
add_opt YARN_PROXYSERVER_OPTS "${JMX_OPTS}=7202"
add_opt YARN_NODEMANAGER_OPTS "-XX:+UseParNewGC"
add_opt YARN_NODEMANAGER_OPTS "-XX:+UseConcMarkSweepGC"
add_opt YARN_NODEMANAGER_OPTS "-Xmx1024m" #"
add_opt YARN_NODEMANAGER_OPTS "-Dsyslog.tag=nodemanager"
add_opt YARN_NODEMANAGER_OPTS "-Ddaemon.logger=INFO,syslog"
add_opt YARN_NODEMANAGER_OPTS "${JMX_OPTS}=7205"
# Specific command-specific options
add_opt HADOOP_NAMENODE_OPTS "-Dhdfs.audit.logger=INFO,RFAAUDIT"
add_opt HADOOP_JOBTRACKER_OPTS "-Dmapred.audit.logger=INFO,MRAUDIT"
add_opt HADOOP_JOBTRACKER_OPTS "-Dmapred.jobsummary.logger=INFO,JSA"
add_opt HADOOP_TASKTRACKER_OPTS "-Dsecurity.audit.logger=ERROR,console"
add_opt HADOOP_TASKTRACKER_OPTS "-Dmapred.audit.logger=ERROR,console"
add_opt HADOOP_SECONDARYNAMENODE_OPTS "-Dhdfs.audit.logger=INFO,RFAAUDIT"

IMPORTANT: there is a file used by the Java MapReduce application where the following properties are set:
"mapreduce.reduce.slowstart.completed.maps" 0.95,
"mapreduce.job.reduces" 983,
"mapreduce.reduce.shuffle.input.buffer.percent" 0.5,

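For reference, the same overrides in standard *-site.xml syntax (a sketch only; in the app they come from that file, not necessarily in this format):

<!-- Sketch only: the per-job overrides listed above, written as a Hadoop configuration resource. -->
<property>
<name>mapreduce.reduce.slowstart.completed.maps</name>
<!-- note: the mapred-site.xml above uses the key mapreduce.job.reduce.slowstart.completedmaps for this setting -->
<value>0.95</value>
</property>
<property>
<name>mapreduce.job.reduces</name>
<value>983</value>
</property>
<property>
<name>mapreduce.reduce.shuffle.input.buffer.percent</name>
<value>0.5</value>
</property>
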
-----------------------------------------------------------------------------------------------------

Some Java settings:

VM memory and heap results from "$ java -XshowSettings:all":

VM settings:
Max. Heap Size (Estimated): 26.52G
Ergonomics Machine Class: server
Using VM: OpenJDK 64-Bit Server VM

Java process details from one randomly selected node; at the moment this node is running reduce tasks for the same app that failed before.

Number of processes = 12
Memory usage per process = 5493.24 MB
Total memory usage = 65918.9 MB

From running ps -aux | grep app_id I get: ........-Xmx8601m.......

-----------------------------------------------------------------------------------------------------

If you need more details please let me know.
Thanks!

Guido.

7 REPLIES

Champion
In your yarn-site.xml you have the mapreduce.* settings, but in your mapred-site.xml you have the mapred.* settings. The settings for MRv2 containers need to be in mapred-site.xml, and they should use the MRv2 API names, which start with mapreduce.*.

The error "GC overhead limit exceeded" indicates that the JVM spent too much time in garbage collection while reclaiming too little of the heap. You could increase the container and heap sizes. You can also try adding the '-XX:-UseGCOverheadLimit' option to the MR container Java opts; a mapred-site.xml sketch combining both suggestions is below.
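For example (a sketch only, reusing the container and heap sizes already in your configuration; the old mapred.* names are noted in comments and the exact values are still yours to tune):

<!-- Sketch: MRv2 property names for mapred-site.xml. -->
<property>
<!-- was mapred.job.map.memory.mb -->
<name>mapreduce.map.memory.mb</name>
<value>6144</value>
</property>
<property>
<!-- was mapred.job.reduce.memory.mb -->
<name>mapreduce.reduce.memory.mb</name>
<value>12288</value>
</property>
<property>
<!-- was mapred.map.child.java.opts -->
<name>mapreduce.map.java.opts</name>
<value>-server -Djava.net.preferIPv4Stack=true -Xmx4300m -XX:-UseGCOverheadLimit</value>
</property>
<property>
<!-- was mapred.reduce.child.java.opts -->
<name>mapreduce.reduce.java.opts</name>
<value>-server -Djava.net.preferIPv4Stack=true -Xmx8601m -XX:-UseGCOverheadLimit</value>
</property>

Keeping the -Xmx values below the container sizes leaves headroom for JVM overhead inside what YARN allocates.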

Contributor

Thanks @mbigelow!

I hope that solves the problem.

I'll let you know.

Thanks again.

Guido.

Contributor

Hi @mbigelow

After putting the MapReduce v2 properties into mapred-site.xml and deleting them from yarn-site.xml, the jobs started to run slower and slower, and they can only get 1 or 2 cores and a few MB of RAM. Before the changes, the jobs used about 300 cores and 3811328 MB of memory.

It looks like the NodeManager was reading these properties from yarn-site.xml. I know they were in the wrong place, but could there be some strange configuration that makes YARN behave like that?

That doesn't make sense to me, but maybe you've seen this before.

Thanks in advance!

Guido.

Master Collaborator

Does it fail in the map phase or the reduce phase? MR jobs that run with 6 GB for the mappers and 12 GB for the reducers and still fail need a review.

How much data is the MR job running on? How many MR jobs do you have? 18K mappers also indicates that you have a lot of small files in your HDFS.

Your cluster consumption of ~300 vcores and ~3 TB of memory means each vcore is running with an average of roughly 12 GB of memory (3467264 MB / 283 vcores). How many nodes do you have? From your stats your jobs seem to be memory-intensive rather than vcore-intensive, which mainly shows up with Spark jobs, and you can tune this to get the optimal vcores and memory for each Spark job.

I suspect you have one specific runaway job. If you are using Cloudera Manager, I would search for it in the applications view, or try running the jobs under pools and then identify the problematic ones.

Champion
I don't know of anything that would make YARN read those properties from the other file. The difference between the mapred.* and mapreduce.* settings is the MR API. It is possible, then, that your MR app is using the older API, and maybe that is why the mapred.* settings were working before.

You can check the configuration for each MR job through the RM UI; use that to verify the exact settings each job ran with.

On resource usage: the number of maps is determined by the input. Are you positive that the app was processing the same amount of data and blocks in each run?

Contributor

@mbigelow that's right! The app uses the older MRv1 API.

The app processes almost the same amount of data each day, plus approximately 30 GB more, so the runtime is expected to grow a little each day (the app runs daily), but over the last few days it went from 9 hours to more than a day to finish.

The solution I found was to decrease the amount of memory instead of increasing it.

Now the app can allocate more containers, so it is running pretty fast.

Thanks for your help; just in case, I'll keep you posted.

Guido.

 

Contributor

Thank you Fawze, and sorry for my delayed answer. I was sick.

It fails in both the map and the reduce phase.

The MR job reads approximately 3 TB from HBase and adds about 30 GB of data per day.

Cluster architecture:

- NodeManagers = 37
- ResourceManagers = 2, with YARN in HA
- 5 nodes with 22 cores and 66 GB of RAM
- 32 nodes with 30 cores and 120 GB of RAM
- Total vcores = 1070
- Total memory = 4.07 TB

 

I realized that decreasing the memory instead of increasing it gave better results.

Now it works really well and the app can get all the cores. The new parameters are below, shown first as a mapred-site.xml-style sketch and then as the raw values:

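(A sketch only: the same four properties in mapred-site.xml syntax; where they actually get set depends on how the job picks up its configuration.)

<property>
<name>mapreduce.map.memory.mb</name>
<value>1536</value>
</property>
<property>
<name>mapreduce.reduce.memory.mb</name>
<value>3072</value>
</property>
<property>
<name>mapreduce.map.java.opts</name>
<value>-Xmx1024m</value>
</property>
<property>
<name>mapreduce.reduce.java.opts</name>
<value>-Xmx2560m</value>
</property>

With 1536 MB map containers, each NodeManager (122635 MB) can fit roughly four times as many containers as at 6144 MB, so the 30 vcores per node become the limit instead of memory. The raw values:
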
"mapreduce.map.memory.mb" 1536
"mapreduce.reduce.memory.mb" 3072
"mapreduce.map.java.opts" -Xmx1024m
"mapreduce.reduce.java.opts" -Xmx2560m
Thanks!
 
Guido.