Created 04-29-2024 06:51 PM
I used the count command to check the cluster's HDFS directory, and it shows about 9 TB of storage space still free. However, when I upload a 2 MB file, it reports that there is insufficient space. Has anyone encountered this situation before?
Created on 04-29-2024 07:01 PM - edited 04-29-2024 07:01 PM
My HDFS version number is 3.0.0+cdh6.3.2. Is this a bug?
Created 05-13-2024 02:59 AM
Hi Team,
Did you use the hdfs dfsadmin -report command to check the DFS and non-DFS usage? Only DFS space is usable by HDFS, so I believe the 9 TB you see is the total storage including both DFS and non-DFS. Also, when copying the file to HDFS, have you enabled debug logging?
HADOOP_ROOT_LOGGER=DEBUG,console
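For reference, one way to compare the configured capacity, DFS used, non-DFS used, and DFS remaining reported per DataNode, and then retry the upload with debug output on the console, would be something like the following (the file path is only a placeholder):

hdfs dfsadmin -report | grep -E 'Configured Capacity|DFS Used|Non DFS Used|DFS Remaining'
export HADOOP_ROOT_LOGGER=DEBUG,console
hdfs dfs -put /tmp/<yourfile> /tmp

If DFS Remaining is much smaller than 9 TB, or unevenly spread across DataNodes, that would explain the insufficient-space error.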
Also try running the command below; it copies the file from local to HDFS:
hdfs dfs -Ddfs.replication=#noofdatanodes -put /tmp/<3mbfile> /tmp
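For instance, on a hypothetical three-DataNode cluster and with an illustrative file name, that would look like:

hdfs dfs -Ddfs.replication=3 -put /tmp/test-2mb.dat /tmp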
Created 12-12-2024 09:48 AM
@cc_yang It is possible that an HDFS space quota has been set on the directory and the directory has reached its hard limit, which would cause the file upload to fail with an insufficient-space message. You can read more about HDFS quotas here:
https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsQuotaAdminGuide.html
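As a quick check (the directory path below is just an example), you can list the name and space quotas on the target directory, and clear the space quota if it turns out to be the cause:

hdfs dfs -count -q /path/to/target/dir
hdfs dfsadmin -clrSpaceQuota /path/to/target/dir

In the -count -q output, the QUOTA/REM_QUOTA and SPACE_QUOTA/REM_SPACE_QUOTA columns show the configured and remaining limits; "none" and "inf" mean no quota is set on that directory.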