
DFS Remaining does not match the remaining disk size



I have a problem.

Our HDFS cluster capacity is 300 TB, but Non DFS Used is 50 TB.

The CDH version is 5.7.2.

I picked one of the DataNodes as an example; it has 4 disks, and dfs.datanode.du.reserved = 10G.


------ hdfs dfsadmin -report info ------

Name: (hadoop07)
Hostname: hadoop07
Rack: /default
Decommission Status : Normal
Configured Capacity: 23585072676864 (21.45 TB)
DFS Used: 15178100988126 (13.80 TB)
Non DFS Used: 5833234295881 (5.31 TB)
DFS Remaining: 2573737392857 (2.34 TB)
DFS Used%: 64.35%
DFS Remaining%: 10.91%
Configured Cache Capacity: 4294967296 (4 GB)
Cache Used: 0 (0 B)
Cache Remaining: 4294967296 (4 GB)
Cache Used%: 0.00%
Cache Remaining%: 100.00%
Xceivers: 60




------ df -h info ------

/dev/sda1 5.4T 3.5T 1.9T 65% /dnn/data1
/dev/sdb1 5.4T 3.5T 1.9T 65% /dnn/data2
/dev/sdc1 5.4T 3.5T 1.9T 66% /dnn/data3
/dev/sdd1 5.4T 3.5T 1.9T 66% /dnn/data4



The remaining disk size is 1.9 * 4 = 7.6 TB,

but DFS Remaining is 2.34 TB. Why are these two values so different?
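One thing worth noting up front: the three usage figures in the dfsadmin report are internally consistent. Non DFS Used is, by definition, whatever configured capacity is neither DFS block data nor reported as remaining, and the numbers above add up exactly:

```python
# Sanity-check the dfsadmin -report figures from the hadoop07 node above.
capacity  = 23585072676864   # Configured Capacity (bytes)
dfs_used  = 15178100988126   # DFS Used
remaining =  2573737392857   # DFS Remaining

non_dfs = capacity - dfs_used - remaining
print(non_dfs)                       # 5833234295881 bytes, matching the report
print(round(non_dfs / 1024**4, 2))   # ~5.31 TiB
```

So the question is really: what is eating the ~5 TB per node that neither df's free space nor the DataNode's block files account for?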


Master Guru
Are these values (from the dfsadmin -report, specifically) consistent or do
they change over time slightly?

I ask because the DFS used, while mostly derived from df/du stats on Linux,
also includes 'virtual' reservations for replicas being written (rbw) and
their block sizes. For example, if a write is ongoing when you check, it will
include a full block size as used (and later adjust it down to the real
size when the replica is complete).

Secondly, what does tune2fs -l /dev/sda1 show about filesystem level
reserved blocks? For DN disks you can typically set them to 0 (or 1%) to
fully utilize them.
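For a sense of scale, here is a rough estimate of what ext4's reserved blocks cost on disks this size. The 5% figure is the mke2fs default; the disk size is taken from the df output above, and the numbers are illustrative only:

```python
# Approximate ext4 reserved-block cost on one 5.4 TiB DataNode disk,
# for the default 5% reservation vs. `tune2fs -m 1` and `-m 0`.
disk_bytes = int(5.4 * 1024**4)   # one data disk, per the df -h output

for pct in (5, 1, 0):
    reserved = disk_bytes * pct // 100
    print(f"-m {pct}: {reserved / 1024**4:.2f} TiB reserved")
```

At the 5% default that is roughly 0.27 TiB per disk, so about 1 TiB across the node's four disks — noticeable, but far smaller than the 5 TB gap in the report.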

Hi, thanks for the reply. For the filesystem reserved blocks, I have already done that: tune2fs -m 1 /dev/sda1 (and sdb1, sdc1, sdd1). Meanwhile, the cluster's Non DFS Used keeps growing slowly, from 44 TB to 54 TB in 7 days. How can I reduce Non DFS Used?

Master Guru
Linux's df Used column minus Linux's du (over the DataNode's data
directories) is basically how non-DFS is determined, which implies there's
data outside of the DataNode directories (such as YARN NodeManager
transient storage, etc.). Try checking the contents of the mounts for data
lying outside of the DN configured paths.
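The check suggested above can be sketched in a few lines: compare what the OS reports as used on a mount against what the DataNode's own directory actually holds. The paths in the comment come from this thread's df output and block-pool path; the function itself is a simplified stand-in for df/du, not the DataNode's actual code:

```python
import os
import shutil

def non_dfs_on_mount(mount: str, dn_dir: str) -> int:
    """Bytes used on `mount` that are NOT under the DataNode dir `dn_dir`."""
    usage = shutil.disk_usage(mount)            # roughly what `df` reports
    du = 0
    for root, _dirs, files in os.walk(dn_dir):  # roughly `du -s dn_dir`
        for name in files:
            try:
                du += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass                            # file removed mid-walk
    return usage.used - du

# e.g. non_dfs_on_mount("/dnn/data1", "/dnn/data1/dfs/dn")
```

If this number is large while `du` over the mount's visible files is small, the space is being held by something df can see but the filesystem tree cannot — such as deleted-but-still-open files.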


I have done what you said. 


Apart from the DFS directory /dnn/data1/dfs/dn/current, the only other data on the DataNode's disks is YARN container log files, and those come to only about 5 GB.

I tried 'lsof | grep deleted' and found no deleted-but-open files holding space. Restarting YARN's JobHistory Server alone didn't help either.

Finally, I had to restart HDFS, and then something magical happened: the moment the restart completed, Non DFS Used dropped directly from 54 TB to 4 TB.

I'm curious how the DataNode calculates a node's remaining available capacity. I read the source code, but found no calls to commands such as du or df.

Master Guru
This may then be explained by open HDFS writer files, as those would've
been closed during a restart. Next time this occurs, try checking the
count of open files via fsck -openforwrite. What is your configured block
size for HDFS files?
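A back-of-the-envelope estimate shows why open writers matter: each replica being written reserves a full block up front, so in-flight files inflate "used" until they close. The numbers below are illustrative, not from this cluster — the writer count would come from something like `hdfs fsck / -openforwrite`:

```python
# Illustrative estimate of space "virtually" reserved by open writers.
block_size   = 128 * 1024 * 1024   # dfs.blocksize; 128 MiB is a common default
open_writers = 200                 # hypothetical count of files open for write

virtually_reserved = open_writers * block_size
print(f"{virtually_reserved / 1024**3:.1f} GiB held for in-flight blocks")  # 25.0 GiB
```

A leak in this reservation accounting (reserved space never released for abandoned writers) would also match the symptom here: Non DFS Used creeping upward for days and collapsing only when the DataNodes restart.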

There were a few issues in past releases, such as (5.5+) and (5.10+), that may be
relevant to what you observed here, and an upgrade can help address them.