timeline server memory leak?
Labels: Apache YARN
Created 07-20-2016 09:15 AM
Hi,
I am using HDP 2.4.0 on CentOS 6.7 with jdk1.8.0_72.
I suspect a memory leak in the timeline server.
I have filed a JIRA (https://issues.apache.org/jira/browse/YARN-5368), but since I use HDP I am also posting here.
I set -Xmx1024m, but the ps command shows 3.5 GB of memory usage, and the usage keeps increasing every day.
What could be the reason? Could it be a LevelDB JNI memory leak, for example?
If there are good metrics for monitoring this, could you tell me about them?
# ps aux | grep timelineserver yarn 6163 2.7 5.3 6630548 3545856 ? Sl Jul13 288:20 /usr/java/jdk1.8.0_72/bin/java -Dproc_timelineserver -Xmx1024m -Dhdp.version=2.4.0.0-169 -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-timelineserver-myhost.log -Dyarn.log.file=yarn-yarn-timelineserver-myhost.log -Dyarn.home.dir= -Dyarn.id.str=yarn -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.4.0.0-169/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.4.0.0-169/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.4.0.0-169/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.4.0.0-169/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dyarn.policy.file=hadoop-policy.xml -Djava.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-timelineserver-myhost.log -Dyarn.log.file=yarn-yarn-timelineserver-myhost.log -Dyarn.home.dir=/usr/hdp/current/hadoop-yarn-timelineserver -Dhadoop.home.dir=/usr/hdp/2.4.0.0-169/hadoop -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.4.0.0-169/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.4.0.0-169/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.4.0.0-169/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.4.0.0-169/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -classpath /usr/hdp/current/hadoop-client/conf:/usr/hdp/current/hadoop-client/conf:/usr/hdp/current/hadoop-client/conf:/usr/hdp/2.4.0.0-169/hadoop/lib/*:/usr/hdp/2.4.0.0-169/hadoop/.//*:/usr/hdp/2.4.0.0-169/hadoop-hdfs/./:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/*:/usr/hdp/2.4.0.0-169/hadoop-hdfs/.//*:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/*:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//*:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/*:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//*::/usr/hdp/2.4.0.0-169/tez/*:/usr/hdp/2.4.0.0-169/tez/lib/*:/usr/hdp/2.4.0.0-169/tez/conf:/usr/hdp/2.4.0.0-169/tez/*:/usr/hdp/2.4.0.0-169/tez/lib/*:/usr/hdp/2.4.0.0-169/tez/conf:/usr/hdp/current/hadoop-yarn-timelineserver/.//*:/usr/hdp/current/hadoop-yarn-timelineserver/lib/*:/usr/hdp/current/hadoop-client/conf/timelineserver-config/log4j.properties org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer
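(For context, one rough way to tell whether the extra memory lives outside the Java heap, for example in native allocations from LevelDB JNI, is to compare the heap statistics the JVM reports with the resident set size the OS reports. A minimal sketch, assuming JDK 8 tools are on the PATH and using PID 6163 from the ps output above:)

# Java heap capacities and usage as reported by the JVM (values in KB):
jstat -gc 6163
# Resident set size as seen by the OS (in KB):
ps -o rss= -p 6163
# If RSS is far above -Xmx plus metaspace and thread stacks, the growth is likely
# in native memory. For a breakdown of the JVM's own native allocations, the daemon
# can be restarted with -XX:NativeMemoryTracking=summary in its JVM options
# (for example via YARN_TIMELINESERVER_OPTS) and then inspected with:
jcmd 6163 VM.native_memory summary
# Note that NMT only accounts for the JVM's own allocations; if RSS is still much
# larger than the NMT total, the remainder most likely comes from native libraries
# such as leveldbjni.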
Created 05-27-2021 02:38 AM
We are seeing the same problem today.
It was 19 GB yesterday and 11 GB today after a restart.
Is there a patch available?
Created 06-09-2021 10:46 PM
Hi wyukawa,
As you mentioned, this is the YARN bug YARN-5368, and it is fixed in HDP 2.5.6.
You can try setting the properties below in your environment and check whether they help.
yarn.timeline-service.ttl-ms=604800000
yarn.timeline-service.rolling-period=daily
yarn.timeline-service.leveldb-timeline-store.read-cache-size=4194304
yarn.timeline-service.leveldb-timeline-store.write-buffer-size=4194304
yarn.timeline-service.leveldb-timeline-store.max-open-files=500
NOTE: Adjust these values to suit your environment.
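For reference, on a manually managed cluster these settings go into yarn-site.xml; a sketch using the example values above (if you manage the cluster with Ambari, set them through the YARN configuration screens instead):

<!-- Example yarn-site.xml entries; adjust the values to your needs -->
<property>
  <name>yarn.timeline-service.ttl-ms</name>
  <value>604800000</value> <!-- retain timeline data for 7 days -->
</property>
<property>
  <name>yarn.timeline-service.rolling-period</name>
  <value>daily</value>
</property>
<property>
  <name>yarn.timeline-service.leveldb-timeline-store.read-cache-size</name>
  <value>4194304</value> <!-- 4 MB -->
</property>
<property>
  <name>yarn.timeline-service.leveldb-timeline-store.write-buffer-size</name>
  <value>4194304</value> <!-- 4 MB -->
</property>
<property>
  <name>yarn.timeline-service.leveldb-timeline-store.max-open-files</name>
  <value>500</value>
</property>

After changing these, restart the Application Timeline Server so the new settings take effect.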
If the workaround above doesn't help, I would suggest upgrading your environment to HDP 2.5.6 or later. You can check the fixed issues at the link below:
https://docs.cloudera.com/HDPDocuments/HDP2/HDP-2.5.6/bk_release-notes/content/fixed_issues.html
Please "Accept as Solution" if my answer was helpful to you.
Thanks!
Created 06-13-2021 10:43 PM
Hi @gael__urbauer, did @shobikas' solution work for you? Have you found a resolution for your issue? If so, can you please mark the appropriate reply as the solution? It will make it easier for others to find the answer in the future.
Regards,
Vidya Sargur, Community Manager
Was your question answered? Make sure to mark the answer as the accepted solution.
If you find a reply useful, say thanks by clicking on the thumbs up button.