The cluster is Cloudera CDH 5.13.3, running Spark 2.3.
There's a long-running Spark job, essentially acting as a daemon, consuming data from Kafka.
The issue is the size of the YARN user appcache for this specific job (stored under yarn.nodemanager.local-dirs), which keeps growing until the job is restarted.
I would like to limit the size of the appcache without having to stop the long-running job. Is that possible?
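For context, these are the NodeManager cache-related properties I'm aware of (a yarn-site.xml sketch; values shown are the stock defaults). As far as I can tell, they govern the localized resource cache rather than the per-application appcache of a running job, which is why I'm asking whether anything similar exists for appcache:

```xml
<!-- Sketch of yarn-site.xml, NOT a confirmed fix for appcache growth -->
<configuration>
  <!-- Target size of the NodeManager's localized resource cache, in MB
       (applies to localized PUBLIC/PRIVATE resources, apparently not
       to a live application's appcache) -->
  <property>
    <name>yarn.nodemanager.localizer.cache.target-size-mb</name>
    <value>10240</value>
  </property>
  <!-- How often the localizer cache cleanup runs, in milliseconds -->
  <property>
    <name>yarn.nodemanager.localizer.cache.cleanup.interval-ms</name>
    <value>600000</value>
  </property>
</configuration>
```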