
Can not find parameter in Ambari: dfs.namenode.fs-limits.max-directory-items

Expert Contributor

Hello,

I am having an issue with the /tmp/hive/hive directory:

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.FSLimitException$MaxDirectoryItemsExceededException): The directory item limit of /tmp/hive/hive is exceeded: limit=1048576 items=1048576

A short search showed that this is controlled by the parameter "dfs.namenode.fs-limits.max-directory-items", which is defined in hdfs-default.xml. However, it is not available in Ambari. Which file should I update, and what is the right path? Should I update it on both hosts in HA mode?
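For anyone hitting the same error, a sketch of what the change would look like: properties that Ambari does not expose can usually be added under HDFS > Configs > Custom hdfs-site; outside Ambari, the same entry would go into hdfs-site.xml on each NameNode host (both NameNodes in an HA pair), followed by a NameNode restart. The value 3145728 below is only an illustrative example, not a recommendation:

```xml
<!-- hdfs-site.xml: raise the per-directory child limit (example value only) -->
<property>
  <name>dfs.namenode.fs-limits.max-directory-items</name>
  <value>3145728</value>
</property>
```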



Expert Contributor

Hello @Aditya Sirna

Thank you for your answer.

I added the parameter with a value of 0, but got an exception (HDP 2.6.3.0 on CentOS 7.2):

2018-03-20 21:09:48,207 ERROR namenode.FSNamesystem (FSNamesystem.java:<init>(913)) - FSNamesystem initialization failed.
java.lang.IllegalArgumentException: Cannot set dfs.namenode.fs-limits.max-directory-items to a value less than 1 or greater than 6400000

So I doubled the old value (to 4194304), and now it works.
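As the exception text shows, this NameNode only accepts values from 1 to 6400000, so 0 cannot be used to disable the check here. A trivial pre-check of a candidate value before editing the config and restarting, as a sketch (the limit value is just the one from this thread):

```shell
# Candidate value for dfs.namenode.fs-limits.max-directory-items (example)
NEW_LIMIT=4194304

# The NameNode rejects anything outside 1..6400000 at startup,
# so validate the value before touching hdfs-site.xml.
if [ "$NEW_LIMIT" -ge 1 ] && [ "$NEW_LIMIT" -le 6400000 ]; then
  echo "ok: $NEW_LIMIT is within the accepted range"
else
  echo "invalid: $NEW_LIMIT must be between 1 and 6400000"
fi
```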


Will HDFS remove the tmp directory automatically? Is there a preconfigured cleanup interval for that? If not, might the tmp directory exceed the new limit again? And could HDFS hit an OOM exception while cleaning it, as I did when trying to clean it manually?

You can check my other question if you have any comments on it:

https://community.hortonworks.com/questions/179904/having-issue-with-tmp-directory-removal.html