YARN Application History Server crashes

Apparently we're hitting the Application History Server fairly often, and lately this has caused it to crash:

2017-03-12 14:04:38,198 ERROR mortbay.log (Slf4jLog.java:warn(87)) - Error for /applicationhistory
java.lang.OutOfMemoryError: GC overhead limit exceeded
2017-03-12 14:04:38,198 FATAL yarn.YarnUncaughtExceptionHandler (YarnUncaughtExceptionHandler.java:uncaughtException(51)) - Thread Thread[timeline,5,main] threw an Error. Shutting down now...
java.lang.OutOfMemoryError: GC overhead limit exceeded
2017-03-12 14:04:38,198 INFO applicationhistoryservice.FileSystemApplicationHistoryStore (FileSystemApplicationHistoryStore.java:getApplication(189)) - Completed reading history information of application application_1478290235897_0046
2017-03-12 14:04:38,201 INFO util.ExitUtil (ExitUtil.java:halt(147)) - Halt with status -1 Message: HaltException

Re: YARN Application History Server crashes

@Mark Cohen

The History Server heap size is too small, which produces the error: java.lang.OutOfMemoryError: GC overhead limit exceeded

RESOLUTION:

The History Server's heap can be managed directly via Ambari on Ambari-managed clusters, or manually on non-Ambari-managed clusters. Follow the steps below for an Ambari-managed cluster:

1. Log into Ambari

2. Click on the MapReduce2 service

3. Click on the Configs tab, then on the Advanced tab

4. Increase the History Server heap size to 8096 (the value is in MB), save, and restart the service when prompted.
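The Ambari steps above can also be scripted with the configs.sh helper that ships with Ambari Server. The hostname, cluster name, credentials, and the jobhistory_heapsize key below are assumptions based on common HDP stacks; confirm the config key against your own Ambari version before running.

```shell
# Hypothetical sketch: set the History Server heap via Ambari's bundled
# config script. Host, cluster, credentials, and key name are placeholders.
/var/lib/ambari-server/resources/scripts/configs.sh \
    -u admin -p admin set ambari-host.example.com MyCluster \
    mapred-env jobhistory_heapsize 8096
# A restart of the MapReduce2 service is still required for the new heap to apply.
```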

Follow the steps below for a non-Ambari-managed cluster:

1. Edit the mapred-env.sh file located in the /etc/hadoop/conf directory

2. Set export HADOOP_JOB_HISTORYSERVER_HEAPSIZE=8096 (the value is in MB)

3. Restart the MapReduce2 service so the new heap size takes effect
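The non-Ambari steps above boil down to a single line in mapred-env.sh. The sketch below appends it to a scratch stand-in file so it is safe to run anywhere; on a real cluster node the target is /etc/hadoop/conf/mapred-env.sh, and the History Server must be restarted afterwards.

```shell
# Use a scratch stand-in so this sketch is safe to run; on a cluster node
# the real file is /etc/hadoop/conf/mapred-env.sh.
CONF_DIR=$(mktemp -d)
touch "$CONF_DIR/mapred-env.sh"

# The heap size value is interpreted in MB by the Hadoop start-up scripts.
echo 'export HADOOP_JOB_HISTORYSERVER_HEAPSIZE=8096' >> "$CONF_DIR/mapred-env.sh"

# Verify the setting landed:
grep HADOOP_JOB_HISTORYSERVER_HEAPSIZE "$CONF_DIR/mapred-env.sh"
```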

Hope this helps.
