Cannot find parameter in Ambari: dfs.namenode.fs-limits.max-directory-items
Labels: Apache Ambari, Apache Hadoop
Created 03-20-2018 03:20 PM
Hello,
I am having an issue with the /tmp/hive/hive directory:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.FSLimitException$MaxDirectoryItemsExceededException): The directory item limit of /tmp/hive/hive is exceeded: limit=1048576 items=1048576
After a short search I found that this limit is controlled by the parameter "dfs.namenode.fs-limits.max-directory-items" in hdfs-default.xml. However, it is not exposed in Ambari. Which file should I update? What is the right path? Should I update it on both NameNode hosts for HA mode?
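For reference, the current effective value and how close the directory is to the limit can be checked with the standard HDFS CLI (the path below is the one from the exception; a larger client heap may be needed for such a large listing):

```bash
# What the NameNode currently enforces (the hdfs-default.xml default is 1048576)
hdfs getconf -confKey dfs.namenode.fs-limits.max-directory-items

# Number of direct children of the directory (the limit applies per directory);
# the "Found N items" header gives the count. Listing ~1M entries may require
# a bigger client heap, e.g. export HADOOP_CLIENT_OPTS="-Xmx2g".
hdfs dfs -ls /tmp/hive/hive | head -1
```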
Created 03-20-2018 05:25 PM
You can add it from Ambari.
Go to Ambari -> HDFS -> Configs -> Advanced -> Custom hdfs-site and add the key (dfs.namenode.fs-limits.max-directory-items).
If you set it to 0, the check will be disabled. Ambari will take care of pushing the config to all the nodes on restart.
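After the restart you can verify that the new value was picked up, assuming the hdfs client on the node reads the hdfs-site.xml that Ambari pushed:

```bash
# Should print the value added under Custom hdfs-site
hdfs getconf -confKey dfs.namenode.fs-limits.max-directory-items
```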
-Aditya
Created 03-20-2018 06:43 PM
Hello @Aditya Sirna
Thank you for your answer.
I added the parameter with a value of 0 but got the following exception (HDP 2.6.3.0 on CentOS 7.2):
2018-03-20 21:09:48,207 ERROR namenode.FSNamesystem (FSNamesystem.java:<init>(913)) - FSNamesystem initialization failed.
java.lang.IllegalArgumentException: Cannot set dfs.namenode.fs-limits.max-directory-items to a value less than 1 or greater than 6400000
Thus, I doubled the old value (4194304) and now it works.
Will HDFS remove the tmp dir on its own? Is there a preset cleanup period configured for that? If not, could the tmp dir exceed the new limit again? And could HDFS hit an OOM exception while cleaning it, like the one I got when trying to clean it manually?
You can check my other question if you have a comment on it.
https://community.hortonworks.com/questions/179904/having-issue-with-tmp-directory-removal.html
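As far as I understand, HDFS itself does not clean /tmp/hive, and only sessions that exit cleanly remove their scratch directories, so leftovers from killed sessions accumulate. In the meantime I am considering a periodic cleanup along these lines (rough sketch, dry run by default; the path, retention period, and GNU date usage are my own assumptions):

```bash
#!/bin/bash
# Print Hive scratch sub-directories under /tmp/hive/hive last modified more
# than 7 days ago; uncomment the -rm line once the candidate list looks right.
CUTOFF=$(date -d '7 days ago' +%Y-%m-%d)

hdfs dfs -ls /tmp/hive/hive |
  awk -v cutoff="$CUTOFF" '$6 != "" && $6 < cutoff {print $8}' |
  while read -r dir; do
    echo "would remove: $dir"
    # hdfs dfs -rm -r -skipTrash "$dir"
  done
```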
