03-03-2019 07:25 PM
I have a problem.
Our HDFS cluster capacity is 300 TB, but Non DFS Used is 50 TB.
The CDH version is 5.7.2.
I picked one of the datanodes to check. It has 4 disks, and dfs.datanode.du.reserved = 10G.
------ hdfs dfsadmin -report info ------
Decommission Status : Normal
Configured Capacity: 23585072676864 (21.45 TB)
DFS Used: 15178100988126 (13.80 TB)
Non DFS Used: 5833234295881 (5.31 TB)
DFS Remaining: 2573737392857 (2.34 TB)
DFS Used%: 64.35%
DFS Remaining%: 10.91%
Configured Cache Capacity: 4294967296 (4 GB)
Cache Used: 0 (0 B)
Cache Remaining: 4294967296 (4 GB)
Cache Used%: 0.00%
Cache Remaining%: 100.00%
---df -h info ----
/dev/sda1 5.4T 3.5T 1.9T 65% /dnn/data1
/dev/sdb1 5.4T 3.5T 1.9T 65% /dnn/data2
/dev/sdc1 5.4T 3.5T 1.9T 66% /dnn/data3
/dev/sdd1 5.4T 3.5T 1.9T 66% /dnn/data4
The remaining disk space is 1.9 TB x 4 = 7.6 TB,
but DFS Remaining is only 2.34 TB. Why are these two values so different?
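One thing worth checking first: the dfsadmin figures are internally consistent with each other, because the report derives Non DFS Used as Configured Capacity minus DFS Used minus DFS Remaining. A quick sanity check against the byte counts in the report above (plain Python, just arithmetic):

```python
# Byte counts copied from the "hdfs dfsadmin -report" output above.
configured_capacity = 23585072676864   # 21.45 TB
dfs_used            = 15178100988126   # 13.80 TB
dfs_remaining       = 2573737392857    # 2.34 TB

# Non DFS Used is the leftover of the other three report figures.
non_dfs_used = configured_capacity - dfs_used - dfs_remaining
print(non_dfs_used)  # 5833234295881 bytes, i.e. the 5.31 TB in the report
```

So the odd number is really DFS Remaining itself: it is not read straight from `df`, which is why it can diverge so far from the 7.6 TB the filesystems show as free.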
03-05-2019 06:20 PM
I have done what you said.
Apart from the DFS directory '/dnn/data1/dfs/dn/current', the only other data on the datanode's disks is YARN container log files, and those logs total only about 5 GB.
I tried 'lsof | grep deleted' and found no deleted-but-still-open files holding space. Restarting YARN's JobHistory Server alone did not help.
Finally I had to restart HDFS, and then something magical happened: the moment the restart completed, Non DFS Used dropped straight from 54 TB to 4 TB.
I'm curious how the datanode calculates a node's remaining available capacity. I read the source code, but found no calls to commands such as du or df.
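For what it's worth, the datanode does use du and df, just indirectly: per-volume capacity comes from org.apache.hadoop.fs.DF (which shells out to `df`), while DFS Used comes from a cached `du` result maintained by a background thread (refreshed every fs.du.interval, 600000 ms by default). Remaining space is then derived from those cached figures rather than read live. A rough Python paraphrase of the per-volume logic in Hadoop 2.x (the variable names and exact clamping are from memory, so treat this as a sketch, not the authoritative FsVolumeImpl code):

```python
def get_available(df_capacity, df_available, cached_dfs_used, reserved):
    """Sketch of how a datanode volume reports its remaining space.

    df_capacity / df_available: live figures from `df` on the volume.
    cached_dfs_used: the `du` result cached by the refresh thread; it can
    be stale for up to fs.du.interval milliseconds.
    reserved: dfs.datanode.du.reserved for this volume.
    """
    capacity = df_capacity - reserved          # this volume's "Configured Capacity"
    remaining = capacity - cached_dfs_used     # what the cached du implies is free
    # Never report more free space than the filesystem actually has.
    remaining = min(remaining, df_available)
    return max(remaining, 0)
```

If the cached du figure is wrong, for instance if it still counts blocks that have since been deleted, the computed remaining shrinks and the shortfall shows up as inflated Non DFS Used. Restarting the datanode forces a fresh du scan, which would be consistent with the sudden 54 TB to 4 TB drop you saw.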