Support Questions
Find answers, ask questions, and share your expertise

Limit the size of the appcache for a long-running Spark job

The cluster runs Cloudera CDH 5.13.3 with Spark 2.3.


There is a long-running Spark job, essentially a daemon, that consumes data from Kafka.

The issue is the size of the YARN user appcache for this specific job (stored under yarn.nodemanager.local-dirs), which keeps growing until the job is restarted.
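To illustrate the growth described above, the per-application appcache usage can be measured directly on a NodeManager host with du. The snippet below is a sketch: it builds a throwaway directory tree that mimics the NodeManager local-dir layout (usercache/<user>/appcache/<application_id>) so it can run anywhere; on a real host you would point du at the actual directories configured in yarn.nodemanager.local-dirs. The paths, user name, and application id here are illustrative, not taken from the cluster in question.

```shell
# Stand-in for one of the directories in yarn.nodemanager.local-dirs.
NM_LOCAL_DIR=$(mktemp -d)

# Mimic the NodeManager layout: usercache/<user>/appcache/<application_id>
mkdir -p "$NM_LOCAL_DIR/usercache/spark/appcache/application_0001"

# Simulate accumulated shuffle/spill data (64 KB of zeroes).
dd if=/dev/zero \
   of="$NM_LOCAL_DIR/usercache/spark/appcache/application_0001/shuffle.data" \
   bs=1024 count=64 2>/dev/null

# Report each application's appcache size, largest first.
# On a real host, replace "$NM_LOCAL_DIR" with your configured local dirs.
du -sk "$NM_LOCAL_DIR"/usercache/*/appcache/application_* | sort -rn

# Clean up the throwaway tree.
rm -rf "$NM_LOCAL_DIR"
```

Running this kind of check periodically (e.g. from cron) is a low-risk way to confirm which application directory is growing before changing any configuration.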


I would like to limit the size of the appcache without having to stop the long-running job. Is that possible?

