Created 11-24-2015 03:33 AM
Seeing the following error on HDP 2.3.0:
2015-10-19 07:33:03,353 ERROR mapred.ShuffleHandler (ShuffleHandler.java:exceptionCaught(1053)) - Shuffle error:
java.lang.OutOfMemoryError: GC overhead limit exceeded
	at java.util.Arrays.copyOf(Arrays.java:2219)
	at java.util.ArrayList.grow(ArrayList.java:242)
	at java.util.ArrayList.ensureExplicitCapacity(ArrayList.java:216)
	at java.util.ArrayList.ensureCapacityInternal(ArrayList.java:208)
	at java.util.ArrayList.add(ArrayList.java:440)
	--
	at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:459)
	at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536)
	at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)
	at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
	at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
2015-10-21 07:05:13,532 FATAL yarn.YarnUncaughtExceptionHandler (YarnUncaughtExceptionHandler.java:uncaughtException(51)) - Thread Thread[Container Monitor,5,main] threw an Error. Shutting down now...
java.lang.OutOfMemoryError: GC overhead limit exceeded
	at java.io.BufferedReader.<init>(BufferedReader.java:98)
	at java.io.BufferedReader.<init>(BufferedReader.java:109)
	at org.apache.hadoop.yarn.util.ProcfsBasedProcessTree.constructProcessInfo(ProcfsBasedProcessTree.java:545)
	at org.apache.hadoop.yarn.util.ProcfsBasedProcessTree.updateProcessTree(ProcfsBasedProcessTree.java:225)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl$MonitoringThread.run(ContainersMonitorImpl.java:439)
Besides increasing "mapreduce.reduce.memory.mb", can I also add "-XX:-UseGCOverheadLimit" to "mapreduce.admin.reduce.child.java.opts"?
Would it also be a good idea to reduce "mapreduce.reduce.shuffle.input.buffer.percent"?
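To see why the buffer percent matters here, the relationship between the reducer container size, its JVM heap, and the shuffle buffer can be sketched as below. This is a rough rule of thumb, not an exact formula: the 80% heap fraction is a common sizing guideline (the real -Xmx comes from mapreduce.reduce.java.opts), and the shuffle share comes from mapreduce.reduce.shuffle.input.buffer.percent.

```python
# Rough sizing sketch (rule of thumb, not authoritative): how much of a
# reducer container can end up holding shuffle data in memory.

def reducer_sizing(container_mb, heap_fraction=0.8, shuffle_pct=0.70):
    """Return (heap_mb, shuffle_buffer_mb) for a reducer container.

    heap_fraction: common guideline of ~80% of the container for -Xmx,
    leaving headroom for native/off-heap memory.
    shuffle_pct: mapreduce.reduce.shuffle.input.buffer.percent -- the
    fraction of the heap that may hold map outputs during the shuffle.
    """
    heap_mb = int(container_mb * heap_fraction)
    shuffle_buffer_mb = int(heap_mb * shuffle_pct)
    return heap_mb, shuffle_buffer_mb

# A 2048 MB container under these assumptions: ~1638 MB heap, of which
# ~1146 MB can be consumed by shuffle data alone.
print(reducer_sizing(2048))  # -> (1638, 1146)
```

Note that "-XX:-UseGCOverheadLimit" only disables the early OutOfMemoryError thrown when GC is making little progress; it does not free any memory, so the JVM will likely still die, just later.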
Created 11-24-2015 11:10 AM
Created 11-24-2015 11:15 AM
What's the cluster config (CPU, memory)?
What are the current values of these parameters?
mapreduce.reduce.shuffle.input.buffer.percent
mapreduce.reduce.memory.mb
Created 11-24-2015 11:18 AM
12 CPUs, 64GB memory
mapreduce.reduce.shuffle.input.buffer.percent=0.7
mapreduce.reduce.memory.mb=2048
Created 11-24-2015 11:27 AM