Spark history logs: .inprogress files are not deleted


Hi all,

We are running Ambari 2.6.1 and HDP 2.6.4.

We set the following:

  1. spark.history.fs.cleaner.enabled=true
  2. spark.history.fs.cleaner.interval=1d
  3. spark.history.fs.cleaner.maxAge=1d

but this configuration did not delete the old Spark history .inprogress files.

Any suggestion on how to force the cleanup?

Michael-Bronson

Rising Star

Hi @Michael Bronson,

Is it deleting everything else except the .inprogress files?

The fix for the following issue is already present in HDP 2.6.4:

https://issues.apache.org/jira/browse/SPARK-8617

One of the proposed changes there was to use the load time as lastUpdated for in-progress files, while keeping modTime for completed files. The first change prevents deletion of in-progress job files; the second ensures that the lastUpdated time of completed jobs does not change when the History Server reboots.

- Double-check the timestamps of the .inprogress files.

- Check that they do not correspond to applications that are actually still running (streaming apps, for example).

- Check the permissions on these files, and try manually deleting one of the lingering .inprogress files while logged in as the spark user to see whether it can be removed.

- Restart the SHS and check its log for errors while it tries to remove these .inprogress files, for example messages produced by:

case t: Exception => logError("Exception in cleaning logs", t)
logError(s"IOException in cleaning ${attempt.logPath}", t)
logInfo(s"No permission to delete ${attempt.logPath}, ignoring.")

Regards,

David

Explorer

Edit spark-defaults.conf as follows:

spark.history.fs.cleaner.enabled true

spark.history.fs.cleaner.maxAge 12h [ Job history files older than this will be deleted when the file system history cleaner runs.]

spark.history.fs.cleaner.interval 1h [This dictates how often the file system job history cleaner checks for files to delete.]

Restart the Spark History Server.

Setting these values at application run time (via --conf with spark-submit) has no effect. Either set them at cluster creation time via the EMR configuration API, or manually edit spark-defaults.conf, set these values, and restart the Spark History Server. Also note that an application's logs are only cleaned up the next time that application restarts. For example, if you have a long-running Spark streaming job, its logs will not be deleted during that run and will keep accumulating; they are cleaned up the next time the job restarts.

Alternatively, event logging can be disabled entirely by setting spark.eventLog.enabled to false.
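The maxAge rule described above boils down to a modification-time comparison against a Spark-style duration string. A hedged sketch of that check (function names are illustrative, not Spark's internals; only single-unit durations like "30m", "12h", "1d" are handled):

```python
def parse_duration(s):
    """Parse a simple duration string like '30m', '12h', or '1d' into seconds."""
    units = {"s": 1, "m": 60, "h": 3600, "d": 86400}
    return int(s[:-1]) * units[s[-1]]

def eligible_for_deletion(mtime, now, max_age="12h"):
    """A log file becomes a cleanup candidate once now - mtime exceeds maxAge."""
    return (now - mtime) > parse_duration(max_age)
```

With maxAge set to "12h" and interval set to "1h", each hourly cleaner pass deletes any log whose modification time is more than 43200 seconds in the past.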

Cloudera Employee

Hi,

 

We understand that all the required properties are enabled; you can adjust the interval and maxAge to suit your requirements. Also, could you tell us how old the files that are not being deleted are? If they are very old, you may need to delete the .inprogress files from that location manually.

 

Thanks

AKR 

Cloudera Employee

Hi,

 

Adding to my reply above: if there are too many old files in the SHS event-log folder, the cleaner may not work as expected. So if there are very old .inprogress files, the ideal approach is to delete them manually.

 

Thanks

AKR