
Log aggregation for long-running Spark Streaming jobs

Rising Star

The documentation for YARN log aggregation says that logs are aggregated after an application completes.

Does this rule out YARN log aggregation for Spark Streaming jobs? In theory, streaming jobs run for a much longer duration and potentially never terminate. I want to get the Spark Streaming logs into HDFS before the job completes, since a streaming job effectively runs forever. Is there a good way to get Spark log data into HDFS while the job is still running?
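I did come across rolling log aggregation, which is supposed to upload the logs of still-running applications to HDFS on an interval. A minimal yarn-site.xml sketch of what I have in mind (whether this property is honored depends on the Hadoop version in the distribution, so treat it as an assumption on my part):

    <!-- Upload container logs of running applications on a rolling
         interval (in seconds). YARN enforces a minimum interval, so
         very small values are bumped up to the configured floor. -->
    <property>
      <name>yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds</name>
      <value>3600</value>
    </property>

Would something like this be the right approach for a streaming job?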


Suri


New Contributor

Hi,

We are running a Spark Streaming job on a cluster managed by CM 6. After the job has been running for about 4-5 days, the Spark UI for that particular application no longer opens, and my nohup driver output file fills with entries like this:

servlet.ServletHandler: Error for /streaming/
java.lang.OutOfMemoryError: Java heap space

These messages are logged repeatedly in a continuous series.

The job itself keeps running fine; I am just not able to open the UI via the Application Master link from the YARN Running Applications page.
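From what I understand, the driver holds UI state (retained jobs, stages, tasks, and streaming batches) in heap, so on a job that runs for days this state can grow until the UI servlet hits OutOfMemoryError even though the processing itself is unaffected. What I am planning to try is lowering the retained counts and giving the driver more headroom; a sketch of the submit command, with the jar name and values picked as assumptions rather than recommendations (defaults and exact behavior vary by Spark version):

    # Hypothetical settings to cap driver-side UI state on a long-running job.
    spark-submit \
      --driver-memory 4g \
      --conf spark.ui.retainedJobs=100 \
      --conf spark.ui.retainedStages=100 \
      --conf spark.ui.retainedTasks=10000 \
      --conf spark.streaming.ui.retainedBatches=100 \
      your_streaming_app.jar

Does that sound like the right direction?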