
Log management for Long-running Spark Streaming Jobs on YARN Cluster

Solved

Log management for Long-running Spark Streaming Jobs on YARN Cluster

Contributor

Hi, we need to find a way to maintain and search logs for long-running Spark Streaming jobs on YARN. We have log aggregation disabled in our cluster. We are thinking about Solr/Elasticsearch, and maybe Flume or Kafka to read the Spark job logs.

 

Any suggestions on how to implement search on these logs and manage them easily?
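Roughly, what we have in mind for the Kafka option is pointing the containers' log4j at a Kafka appender, something like the sketch below (the appender class and property names still need to be verified against whichever kafka-log4j-appender version we end up using; the broker list and topic name are just placeholders):

# Sketch only: ship Spark container logs to Kafka via log4j
log4j.rootLogger=INFO, console, KAFKA

log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c: %m%n

# placeholder brokers and topic
log4j.appender.KAFKA=org.apache.kafka.log4jappender.KafkaLog4jAppender
log4j.appender.KAFKA.brokerList=broker1:9092,broker2:9092
log4j.appender.KAFKA.topic=spark-streaming-logs
log4j.appender.KAFKA.layout=org.apache.log4j.PatternLayout
log4j.appender.KAFKA.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c: %m%n

The file would then be shipped to the executors with something like --files log4j.properties and --conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=log4j.properties".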

 

 

Thanks,

Suri

1 ACCEPTED SOLUTION


Re: Log management for Long-running Spark Streaming Jobs on YARN Cluster

Champion
Sorry, I gave the wrong setting earlier. The correct one is:

yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds
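For reference, a rough sketch of how that could look in yarn-site.xml on the NodeManagers (the 3600 value is just an example; I believe YARN enforces a minimum roll interval of 3600 seconds here, so check the docs for your version):

<!-- yarn-site.xml (sketch only; the interval value is an example) -->
<property>
  <name>yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds</name>
  <!-- upload container logs for still-running applications roughly once an hour -->
  <value>3600</value>
</property>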
13 REPLIES

Re: Log management for Long-running Spark Streaming Jobs on YARN Cluster

Champion
I am assuming that log aggregation was turned off because it doesn't trigger until a job completes, which is useless for long-running/streaming jobs. I recommend turning it back on and using yarn.log-aggregation.retain-check-interval-seconds to have the logs collected on a regular basis.

Solr/ES is really good for the counters/metrics and could be used for the logs as well.
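If you do turn it back on, a minimal yarn-site.xml sketch of re-enabling aggregation would be the switch below (assuming you edit yarn-site.xml directly rather than through the equivalent setting in CM, and restart the NodeManagers afterwards):

<!-- yarn-site.xml: re-enable log aggregation (sketch only) -->
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>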

Re: Log management for Long-running Spark Streaming Jobs on YARN Cluster

Contributor

@mbigelow You are right. We turned it off because of the long-running jobs.

 

Do you know any other ways to implement log search other than Solr/Elasticsearch?

 

Suri

Re: Log management for Long-running Spark Streaming Jobs on YARN Cluster

Champion
What are you trying to achieve exactly? Do you just want to be able to search through the logs for key phrases? Do you want all basic users to be able to search the raw logs? Are you trying to hunt down problematic jobs?

Re: Log management for Long-running Spark Streaming Jobs on YARN Cluster

Contributor

We want to search for key phrases, and at the same time we want developers to be able to look into the raw logs for their own troubleshooting, plus alerts for specific errors.

Re: Log management for Long-running Spark Streaming Jobs on YARN Cluster

Champion
You could do this in many ways. You could just load the logs into Solr/ES and go to town. Hive would not be a great fit, but I could see some tables being built around specific data like job counters or metrics. MR jobs could be built to pull out specific data (possibly loading it into a Hive table), or Spark jobs (the Spark shell can be used to explore the raw data). And simple tools like grep, awk, etc. can be used, as the individual logs, once aggregated, are available to the user.
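As a rough illustration of the Spark shell route, something like this could be pasted into spark-shell, assuming the logs have already been dumped as plain text to an HDFS directory (the path and the "ERROR" phrase are just placeholders):

// Sketch only: assumes container logs were exported as plain text
// (e.g. with "yarn logs -applicationId <appId>") to this placeholder HDFS path.
val logs = sc.textFile("hdfs:///tmp/streaming-app-logs/*")

// Filter for a key phrase and peek at a few matching lines.
val errors = logs.filter(_.contains("ERROR"))
println(s"matching lines: ${errors.count()}")
errors.take(20).foreach(println)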

If you have CM, the YARN Applications screen for a cluster is, I'm pretty sure, built using an embedded Solr, and it gives you an idea of what could be done. Again, this is more around metrics and job counters.

Re: Log management for Long-running Spark Streaming Jobs on YARN Cluster

Contributor

The documentation for YARN log aggregation says that logs are aggregated after an application completes.

Streaming jobs run for a much longer duration and potentially never terminate. I want to get the logs for my streaming jobs into HDFS before the application completes or terminates. What are better ways to do this, given that log aggregation only does it after a job has completed?

 

Suri

Re: Log management for Long-running Spark Streaming Jobs on YARN Cluster

Champion
This got lost in my earlier reply...

yarn.log-aggregation.retain-check-interval-seconds

This determines when YARN checks whether logs need to be aggregated. By default it is 0, which means it doesn't check and a job must finish first. Setting it will allow the logs to be collected for jobs that, in theory, won't end.

Re: Log management for Long-running Spark Streaming Jobs on YARN Cluster

Contributor

Thanks, @mbigelow.

 

So, if I set yarn.log-aggregation.retain-check-interval-seconds to 60 seconds, it will send the logs to HDFS (every 60 seconds) even when the job has not finished? (Since streaming jobs run forever.)

 

Suri

 

Re: Log management for Long-running Spark Streaming Jobs on YARN Cluster

Champion
Yes, although that is pretty frequent, so I don't know how it will go. I'd be interested to know.