
/hadoop/yarn/local/usercache/hive/filecache/ growing large

Explorer

I have 5 Hadoop/HDFS nodes running RHEL 7.9, Ambari 2.7.5.0, HDP 3.1.5.

/hadoop/yarn/local/usercache/hive/filecache/ on all of the Hadoop nodes is growing very large, and I want to decrease its size.

Is there a YARN setting I can change to limit how much space that filecache uses? Or do I need to manually delete directories that are X days old to clean this up?

Please advise. Thanks! Mike

1 ACCEPTED SOLUTION

Expert Contributor

Hello @MikeB 

As for the automatic cleanup not being triggered, it may be due to (or at least related to) this unresolved bug reported against YARN:
https://issues.apache.org/jira/browse/YARN-4540
You can reduce the value of the property below to lower the local file cache size:
yarn.nodemanager.localizer.cache.target-size-mb
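
If it helps to see it in context, here is a minimal sketch of how that could look once rendered into yarn-site.xml. The 2048 MB value is only an example (the default is 10240 MB, I believe), and the cleanup-interval property is a related setting shown for illustration rather than a confirmed fix for the JIRA above:

<property>
  <name>yarn.nodemanager.localizer.cache.target-size-mb</name>
  <!-- Example value only: target retention size, in MB, for the NodeManager's
       localized file cache; lowering it should shrink .../usercache/hive/filecache
       over time as the cleanup service trims the cache back toward the target. -->
  <value>2048</value>
</property>
<property>
  <name>yarn.nodemanager.localizer.cache.cleanup.interval-ms</name>
  <!-- Example value only: how often the cache cleanup runs; 600000 ms (10 minutes)
       is the default, I believe. -->
  <value>600000</value>
</property>

Also keep in mind that the cache is only trimmed when the cleanup service runs, so the directory can sit above the target size between runs.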


3 REPLIES


Explorer

@balajip Is there a preferred way to add that parameter (and the other one in the issue you linked)? For example, should I add it via the Ambari GUI -> YARN Configs, or edit the appropriate file (yarn-site.xml?) directly? If I edit files, which nodes do I edit them on?

Thanks,

Mike

Expert Contributor

@MikeB You can find or add the property in Ambari --> YARN --> Configs --> Advanced --> Custom yarn-site --> Add Property.
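
And on the follow-up about which nodes to edit, @MikeB: with the Ambari route you should not need to touch any files by hand. After you save the config and restart the NodeManagers, Ambari pushes the change to every NodeManager host, where it should end up looking roughly like this in the managed yarn-site.xml (typically under /etc/hadoop/conf on HDP); the value is just the example from earlier:

<!-- Managed by Ambari; manual edits to this file are overwritten the next time Ambari pushes configs. -->
<property>
  <name>yarn.nodemanager.localizer.cache.target-size-mb</name>
  <value>2048</value>
</property>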