Explorer
Posts: 19
Registered: 09-22-2016

Re: Log management for Long-running Spark Streaming Jobs on YARN Cluster

@mbigelow but some other sources say "set the yarn.log-aggregation.retain-check-interval-seconds to specify how often the log retention check should be run. By default, it is one-tenth of the log retention time". What I understood from this was that it only checks for retention and may not aggregate the logs on that interval. Did I understand that correctly?
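
For reference, here is a minimal yarn-site.xml sketch of the retention-related settings (the values are only illustrative). My understanding is that these two only control how long aggregated logs are kept and how often the deletion check runs, not how often a running application's logs get uploaded:

<!-- Illustrative: keep aggregated logs for 7 days -->
<property>
  <name>yarn.log-aggregation.retain-seconds</name>
  <value>604800</value>
</property>

<!-- Illustrative: run the retention/deletion check every 24 hours;
     the default of -1 means one-tenth of the retention time -->
<property>
  <name>yarn.log-aggregation.retain-check-interval-seconds</name>
  <value>86400</value>
</property>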

 

Suri

Posts: 642
Topics: 3
Kudos: 103
Solutions: 66
Registered: 08-16-2016

Re: Log management for Long-running Spark Streaming Jobs on YARN Cluster

Sorry, wrong setting. The one you want is:

yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds
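
For example, a minimal yarn-site.xml sketch that enables periodic upload while the application is still running (the value is only illustrative):

<!-- Illustrative: have NodeManagers roll up and upload container logs every hour
     instead of waiting for the application to finish -->
<property>
  <name>yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds</name>
  <value>3600</value>
</property>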
Explorer
Posts: 19
Registered: 09-22-2016

Re: Log management for Long-running Spark Streaming Jobs on YARN Cluster

Thank you, I will try it out.
New Contributor
Posts: 1
Registered: 04-14-2017

Re: Log management for Long-running Spark Streaming Jobs on YARN Cluster

It's true that you can aggregate logs to HDFS while the job is still running; however, the minimum log upload interval (yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds) you can set is 3600 seconds, i.e. 1 hour. This design protects the NameNode from being spammed.

 

You may have to use an external service to do the log aggregation, either by writing your own or finding other tools.

 

Below is the proof from yarn-default.xml in the hadoop-common source code (cdh5-2.6.0_5.7.1):

 

<property>
  <description>Defines how often NMs wake up to upload log files.
  The default value is -1. By default, the logs will be uploaded when
  the application is finished. By setting this configure, logs can be uploaded
  periodically when the application is running. The minimum rolling-interval-seconds
  can be set is 3600.
  </description>
  <name>yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds</name>
  <value>-1</value>
</property>

 
