Log Management for Long-Running Spark Streaming Jobs on a YARN Cluster

Rising Star

Hi, we need to find a way to maintain and search logs for long-running Spark streaming jobs on YARN. We have log aggregation disabled in our cluster. We are thinking about Solr/Elasticsearch, and maybe Flume or Kafka, to read the Spark job logs.

 

Any suggestions on how to implement search on these logs and manage them easily?

 

 

Thanks,

Suri

2 ACCEPTED SOLUTIONS

Champion
Sorry, wrong setting.

yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds
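For example, a minimal yarn-site.xml sketch (untested, adapt to your cluster; log aggregation itself also has to be enabled):

<property>
  <!-- Upload logs of still-running applications every hour; 3600 is the documented minimum. -->
  <name>yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds</name>
  <value>3600</value>
</property>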


New Contributor

It's true that you can aggregate logs to HDFS while the job is still running; however, the minimum log uploading interval (yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds) you can set is 3600 seconds, which is 1 hour. The design is intended to protect the NameNode from being spammed.

 

You may have to use an external service to do the log aggregation. Either write your own or find other tools.
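One sketch of that approach (not tested here; the broker list, topic name, and log pattern below are placeholders): point the driver/executor log4j configuration at a Kafka appender and index the topic into Solr/Elasticsearch downstream. Check the appender class and property names against the kafka-log4j-appender version you ship with the job.

# Hypothetical log4j.properties for the Spark driver and executors.
# Requires the kafka-log4j-appender jar on the container classpath.
log4j.rootLogger=INFO, console, kafka

# Keep normal console output so YARN's own container logs still work.
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

# Also ship every log event to a Kafka topic for indexing/search.
log4j.appender.kafka=org.apache.kafka.log4jappender.KafkaLog4jAppender
log4j.appender.kafka.brokerList=broker1:9092,broker2:9092
log4j.appender.kafka.topic=spark-streaming-logs
log4j.appender.kafka.layout=org.apache.log4j.PatternLayout
log4j.appender.kafka.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

The file would be shipped with the job (for example via --files plus -Dlog4j.configuration=log4j.properties in spark.driver.extraJavaOptions and spark.executor.extraJavaOptions; verify the exact options for your Spark version), and a consumer or connector then indexes the topic into Solr/Elasticsearch.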

 

Below is the proof from yarn-default.xml in the hadoop-common source code (cdh5-2.6.0_5.7.1).

 

<property>
  <description>Defines how often NMs wake up to upload log files.
  The default value is -1. By default, the logs will be uploaded when
  the application is finished. By setting this configure, logs can be uploaded
  periodically when the application is running. The minimum rolling-interval-seconds
  can be set is 3600.
  </description>
  <name>yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds</name>
  <value>-1</value>
</property>

 


13 REPLIES

Champion
I am assuming that log aggregation was turned off because it doesn't trigger until a job completes, which is useless for long-running/streaming jobs. I recommend turning it back on and using yarn.log-aggregation.retain-check-interval-seconds to have the logs collected on a regular basis.
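Re-enabling it is a single yarn-site.xml flag (sketch; your management tooling may expose the same thing as a checkbox):

<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>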

Solr/ES is really good for the counters/metrics and could be used for the logs as well.

Rising Star

@mbigelow You are right. We turned it off because of the long-running jobs.

 

Do you know of any other ways to implement log search other than Solr/Elasticsearch?

 

Suri

Champion
What are you trying to achieve exactly? Do you just want to be able to search through the logs for key phrases? Do you want all basic users to be able to search the raw logs? Are you trying to hunt down problematic jobs?

Rising Star

We want to search for key phrases, and at the same time we want developers to be able to look into the raw logs for their troubleshooting, plus alerts for specific errors.

Champion
You could do this in many ways. You could just load it into Solr/ES and go to town. Hive would not be a great fit, but I could see some tables being built around specific data like job counters or metrics. MR or Spark jobs could be built to pull out specific data (possibly loading it into a Hive table), and the Spark shell can be used to explore the raw data. And simple tools like grep, awk, etc. can be used, since the individual logs, once aggregated, are available to the user.
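For example, once an application's logs have been aggregated, something as simple as this already works from the command line (the application ID is just a placeholder):

yarn logs -applicationId application_1483203142365_0001 | grep -iE "error|exception"

The same output can be piped through awk or fed into Solr/ES for the richer searches mentioned above.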

If you have CM, the YARN application screen for a cluster is, I'm pretty sure, built using an embedded Solr and gives you an idea of what could be done. This is more around metrics and job counters again.

Rising Star

The documentation for YARN log aggregation says that logs are aggregated after an application completes.

Streaming jobs run for a much longer duration and potentially never terminate. I want to get the logs into HDFS for my streaming jobs before the application completes or terminates. What are the better ways to do this, since log aggregation only does it after the jobs are completed?

 

Suri

Champion
This got lost in my earlier reply...

yarn.log-aggregation.retain-check-interval-seconds

This determines when it checks whether logs need to be aggregated. By default it is 0, which means it doesn't check and a job must finish. This will allow it to collect the logs for jobs that, in theory, won't end.

Rising Star

Thanks, @mbigelow.

 

So, if I set yarn.log-aggregation.retain-check-interval-seconds to 60 seconds, will it send the logs to HDFS (every 60 seconds) even when the job has not finished? (Since streaming jobs run forever.)

 

Suri

 

Champion
Yes, though that is pretty frequent, so I don't know how it will go. I'd be interested to know.