
How to resolve the error "The directory item limit of /tmp/hive/hive is exceeded: limit=1048576 items=1048576"?

Hi all,

We have an Ambari cluster (HDP version 2.5.4).

In the Spark Thrift Server log we can see the error: The directory item limit of /tmp/hive/hive is exceeded: limit=1048576 items=1048576.

We tried to delete the old files under /tmp/hive/hive, but there are a million files there and we can't even list them, because

hdfs dfs -ls /tmp/hive/hive

doesn't return any output.
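
For reference, hdfs dfs -count returns a summary computed by the NameNode, so it can show how many entries the directory holds even when a full -ls listing is impractical:

hdfs dfs -count /tmp/hive/hive
# output columns: DIR_COUNT  FILE_COUNT  CONTENT_SIZE  PATHNAME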


Any suggestions? How can we delete the old files given that there are a million of them?

Or is there any other solution?



* For now the Spark Thrift Server fails to start because of this error, and HiveServer2 does not start either.

Caused by: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.FSLimitException$MaxDirectoryItemsExceededException): The directory item limit of /tmp/hive/hive is exceeded: limit=1048576 items=1048576
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2347)


Second: can we purge the files periodically, e.g. from cron or by some other mechanism? (See the sketch after the listing below.)


hdfs dfs -ls /tmp/hive/hive
Found 4 items
drwx------   - hive hdfs          0 2019-06-16 21:58 /tmp/hive/hive/2f95f6a5-76ad-487e-968c-1873264a3a9c
drwx------   - hive hdfs          0 2019-06-16 21:45 /tmp/hive/hive/368d201c-cedf-48dc-bbad-f13d6aed7016
drwx------   - hive hdfs          0 2019-06-16 21:58 /tmp/hive/hive/717fb013-535b-4279-a12e-4fc4261c4d68
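
As to the cron idea, here is a minimal sketch, assuming a hypothetical script at /usr/local/bin/clean_hive_tmp.sh and a 7-day retention; the awk field positions ($6 = date, $8 = path) match the hdfs dfs -ls output format shown above, and date -d assumes GNU date:

#!/bin/bash
# clean_hive_tmp.sh - sketch: purge Hive scratch dirs older than 7 days.
# NF==8 skips the "Found N items" header line; $6 is the YYYY-MM-DD mtime.
CUTOFF=$(date -d "-7 days" +%Y-%m-%d)
hdfs dfs -ls /tmp/hive/hive | awk -v c="$CUTOFF" 'NF==8 && $6 < c {print $8}' |
while read -r dir; do
  hdfs dfs -rm -r -skipTrash "$dir"
done

A crontab entry to run it nightly at 02:00 could look like:

0 2 * * * /usr/local/bin/clean_hive_tmp.sh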


Michael-Bronson
31 Replies

Master Mentor

@Michael Bronson

"Mycluster" needs to be replaced with the "fs.defaultFS" parameter of your HDFS config.


Master Mentor

In the case of a NameNode HA-enabled cluster, "dfs.nameservices" is defined, and "fs.defaultFS" is determined based on "dfs.nameservices".


For example, if "dfs.nameservices=mycluster", then "fs.defaultFS" will ideally be "hdfs://mycluster".


If NameNode HA is not enabled, then "fs.defaultFS" points to the NameNode host/port.
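
Both values can be printed with a read-only hdfs getconf query (the comments show example outputs, not values from your cluster):

hdfs getconf -confKey dfs.nameservices   # e.g. hdfsha
hdfs getconf -confKey fs.defaultFS       # e.g. hdfs://hdfsha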


@Jay, in my cluster (HDFS --> Configs) I see dfs.nameservices=hdfsha,

so it should be like this?


hadoop fs -rm -r -skipTrash hdfs://hdfsha/tmp/hive/hive/ 
Michael-Bronson


@Jay, actually it should be like this:


hadoop fs -rm -r -skipTrash hdfs://hdfsha/tmp/hive/hive/*

The "*" after the slash is needed in order to delete only the folders under /tmp/hive/hive and not the folder /tmp/hive/hive itself.
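
One caveat: the "*" glob is expanded by the client, which can be slow with a million entries. A possible alternative, as a sketch only (assuming the parent directory uses the same hive:hdfs ownership and 700 mode as the subfolders in the listing above), is to delete and recreate the directory itself:

hadoop fs -rm -r -skipTrash hdfs://hdfsha/tmp/hive/hive
hadoop fs -mkdir -p hdfs://hdfsha/tmp/hive/hive
hadoop fs -chown hive:hdfs hdfs://hdfsha/tmp/hive/hive
hadoop fs -chmod 700 hdfs://hdfsha/tmp/hive/hive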

Michael-Bronson

Master Mentor

@Michael Bronson

Looks good. Yes, in your command "mycluster" needs to be replaced with "hdfsha".


@Jay - nice.


I see there the option:


hadoop fs -rm -r -skipTrash hdfs://mycluster/tmp/hive/hive/ 


This option will remove all folders under /tmp/hive/hive.


But what is the value "mycluster"? What do I need to put in its place?

Michael-Bronson

Master Mentor

@Michael Bronson

Whenever you change this configuration parameter, the cluster needs to be made aware of the change. When you start Ambari, the underlying components don't get started unless you explicitly start those components!

So you can start Ambari without starting YARN or HDFS.


@Geoffrey Shelton Okot - do you mean to restart the Ambari server, as in "ambari-server restart", instead of restarting the HDFS and YARN services? (after we set dfs.namenode.fs-limits.max-directory-items)

Michael-Bronson

Master Mentor

@Michael Bronson

the parameter "dfs.namenode.fs-limits.max-directory-items " is HDFS specific hence the & HDFS dependent services and HDFS Dependent service components needs to be restarted. In Ambari UI it will show the required service components that needs to be restarted.

No need to restart Ambari Server.
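
After the restart, the active value can be confirmed from any HDFS client with a read-only query:

hdfs getconf -confKey dfs.namenode.fs-limits.max-directory-items
# the default is 1048576, which matches the limit in the error above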


Second, when I saved the parameter in Ambari (Ambari -> HDFS -> Configs -> Advanced -> Custom hdfs-site):

dfs.namenode.fs-limits.max-directory-items=2097152



I get:


The configuration changes could not be validated for consistency due to an unknown error. Your changes have not been saved yet. Would you like to proceed and save the changes? 



Is this parameter supported in HDP version 2.6.4?

Michael-Bronson