
Log Management for Long-Running Spark Streaming Jobs on a YARN Cluster

Rising Star

Hi, we need a way to maintain and search the logs of long-running Spark Streaming jobs on YARN. Log aggregation is disabled in our cluster. We are considering Solr/Elasticsearch, and possibly Flume or Kafka, to read the Spark job logs.

Any suggestions on how to implement search on these logs and manage them easily?

Thanks,

Suri
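
[Editor's note: since the question mentions Kafka and Solr/Elasticsearch, here is a minimal sketch of one common pattern for long-running jobs: route Log4j output straight to Kafka and index from there, bypassing files entirely. It assumes the kafka-log4j-appender jar from Apache Kafka is on the classpath; the broker, topic, and patterns are illustrative, not taken from the thread.]

# Keep a console appender so the normal YARN container logs still work
log4j.rootLogger=INFO, console, KAFKA
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

# Kafka appender shipped with Apache Kafka (kafka-log4j-appender artifact)
log4j.appender.KAFKA=org.apache.kafka.log4jappender.KafkaLog4jAppender
log4j.appender.KAFKA.brokerList=kafka01:9092
log4j.appender.KAFKA.topic=spark-logs
log4j.appender.KAFKA.layout=org.apache.log4j.PatternLayout
log4j.appender.KAFKA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n

# Keep the Kafka client's own logs off the Kafka appender to avoid a feedback loop
log4j.logger.org.apache.kafka=WARN, console
log4j.additivity.org.apache.kafka=false

Such a file can be shipped with spark-submit --files log4j.properties and activated with -Dlog4j.configuration=log4j.properties in spark.driver.extraJavaOptions and spark.executor.extraJavaOptions.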


13 REPLIES

Rising Star

@mbigelow But other sources say to "set yarn.log-aggregation.retain-check-interval-seconds to specify how often the log retention check should be run. By default, it is one-tenth of the log retention time." What I understood from this is that it only checks for retention and may not aggregate the logs on that interval. Did I understand that correctly?

 

Suri
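
[Editor's note: for reference, the two retention settings discussed above look like this in yarn-site.xml; the values are illustrative. They control when already-aggregated logs are deleted, not how often aggregation runs.]

<property>
  <name>yarn.log-aggregation.retain-seconds</name>
  <value>604800</value> <!-- keep aggregated logs for 7 days -->
</property>
<property>
  <name>yarn.log-aggregation.retain-check-interval-seconds</name>
  <value>-1</value> <!-- -1: run the deletion check at one-tenth of retain-seconds -->
</property>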

Champion
Sorry, wrong setting. The relevant one is:

yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds

Rising Star
Thank you, I will try it out.

New Contributor

It's true that you can aggregate logs to HDFS while the job is still running; however, the minimum upload interval (yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds) you can set is 3600 seconds, i.e. one hour. This design protects the NameNode from being spammed.

 

You may have to use an external service for log aggregation: either write your own or use an existing tool; one option is sketched below.
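
[Editor's note: a minimal sketch of such an external option, a Flume agent tailing a known log file into Kafka. It uses Flume 1.6 Kafka sink syntax (the version shipped with CDH 5.7; the sink's keys changed in later Flume releases), and the agent name, file path, broker, and topic are all assumptions.]

# Flume agent "a1": tail one log file into a Kafka topic (all names illustrative)
a1.sources = src
a1.channels = ch
a1.sinks = snk

# exec source: follow the driver log of a client-mode job (path is an assumption;
# per-container executor logs live under the NodeManager log directories)
a1.sources.src.type = exec
a1.sources.src.command = tail -F /var/log/spark/streaming-app-driver.log
a1.sources.src.channels = ch

a1.channels.ch.type = memory
a1.channels.ch.capacity = 10000

# Flume 1.6 Kafka sink
a1.sinks.snk.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.snk.brokerList = kafka01:9092
a1.sinks.snk.topic = spark-logs
a1.sinks.snk.channel = ch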

 

Below is the relevant entry from yarn-default.xml in the hadoop-common source code (cdh5-2.6.0_5.7.1):

 

<property>
  <description>Defines how often NMs wake up to upload log files.
  The default value is -1. By default, the logs will be uploaded when
  the application is finished. By setting this configure, logs can be uploaded
  periodically when the application is running. The minimum rolling-interval-seconds
  can be set is 3600.
  </description>
  <name>yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds</name>
  <value>-1</value>
</property>
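
[Editor's note: to turn on hourly rolling aggregation, assuming yarn.log-aggregation-enable is already true, the override in yarn-site.xml would look like this sketch.]

<property>
  <name>yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds</name>
  <value>3600</value> <!-- 3600 is the minimum the NodeManager will honor -->
</property>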