Archives of Support Questions (Read Only)

This is an archived board, kept read-only for historical reference. Information and links may no longer be available or relevant. To ask a new question, please post a new topic on the appropriate active board.

Non DFS Used is reported as much larger than bash 'df -h' shows, reducing DFS Remaining

Contributor

Seeing this issue on all data nodes.

 

Example for one node:

 

Hadoop has its own partition.
bash 'du -h --max-depth=1' on the Hadoop partition reports the 'dn' directory is consuming 207G.
bash 'df -h' reports the Hadoop partition as size 296G, used 208G, Avail 73G, Use% 75%.

 

Configured Capacity: 314825441690 (293.20 GB)  -- good
DFS Used: 221825508284 (206.59 GB)  -- good
Non DFS Used: 55394479116 (51.59 GB)  -- ??? bash says 1G used outside of 'dn' directory in the partition
DFS Remaining: 37605454290 (35.02 GB) -- ??? bash says 73G free
DFS Used%: 70.46%
DFS Remaining%: 11.94%
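The reported figures are internally consistent: the NameNode effectively derives Non DFS Used as Configured Capacity minus DFS Used minus DFS Remaining, so anything that wrongly shrinks DFS Remaining inflates Non DFS Used by the same amount. A quick check with the byte values above:

```shell
# Byte values copied from the DataNode report above.
capacity=314825441690       # Configured Capacity
dfs_used=221825508284       # DFS Used
dfs_remaining=37605454290   # DFS Remaining

# Non DFS Used is derived, not measured on disk:
non_dfs=$((capacity - dfs_used - dfs_remaining))
echo "Derived Non DFS Used: $non_dfs bytes"   # 55394479116, exactly as reported
```

This suggests the 51.59 GB of "Non DFS Used" is an accounting artifact rather than real files outside 'dn', which matches what du finds on disk.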

 

fsck reports the filesystem as healthy.

 

Red Hat 6.9

CDH 5.8.2 (parcel 5.8.2-1.cdh5.8.2.p0.3)

 

dfs.datanode.du.reserved = 1.96 GiB
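In a healthy state, DFS Remaining should be roughly the partition's free space minus dfs.datanode.du.reserved. A rough sketch in whole GiB, using the 'df -h' figures above, quantifies the gap; the difference is space the DataNode is holding back internally, on top of du.reserved:

```shell
# Rounded GiB values from 'df -h' and the DataNode report above.
df_avail=73             # Avail reported by df -h
du_reserved=2           # dfs.datanode.du.reserved (1.96 GiB, rounded up)
reported_remaining=35   # DFS Remaining reported by the DataNode

expected_remaining=$((df_avail - du_reserved))   # what DFS Remaining should be
gap=$((expected_remaining - reported_remaining))
echo "Expected ~${expected_remaining} GiB remaining, got ${reported_remaining} GiB: ~${gap} GiB unaccounted"
```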

How do I troubleshoot this?

 

Thanks.

 

hdfs dfsadmin -report

Configured Capacity: 1574127208450 (1.43 TB)
Present Capacity: 1277963063885 (1.16 TB)
DFS Remaining: 410632669242 (382.43 GB)
DFS Used: 867330394643 (807.76 GB)
DFS Used%: 67.87%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
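The cluster-wide report shows the same accounting: Non DFS Used for the whole cluster is Configured Capacity minus Present Capacity, and spreading it across the five DataNodes gives roughly the same inflated per-node figure seen above:

```shell
# Byte values copied from 'hdfs dfsadmin -report' above.
configured=1574127208450   # Configured Capacity
present=1277963063885      # Present Capacity (DFS Used + DFS Remaining)
datanodes=5

cluster_non_dfs=$((configured - present))
per_node=$((cluster_non_dfs / datanodes))
echo "Cluster Non DFS Used: $cluster_non_dfs bytes (~$((per_node / 1073741824)) GiB per DataNode)"
```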

 

hdfs fsck /

 Total size:    281353009325 B
 Total dirs:    5236
 Total files:   501295
 Total symlinks:                0 (Files currently being written: 37)
 Total blocks (validated):      501272 (avg. block size 561278 B)
 Minimally replicated blocks:   501272 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       0 (0.0 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    3
 Average block replication:     3.0
 Corrupt blocks:                0
 Missing replicas:              0 (0.0 %)
 Number of data-nodes:          5
 Number of racks:               1

1 ACCEPTED SOLUTION

Contributor

This problem is HDFS-9530, which is fixed in CDH 5.9.0.

 

Bouncing the DataNode instances clears the issue manually until we can upgrade.
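A minimal sketch of the workaround, assuming a package-based install (RHEL 6 init scripts); clusters managed by Cloudera Manager should instead restart the DataNode role from the CM UI. Restart one DataNode at a time so all block replicas stay available:

```shell
# On each DataNode, one at a time (service name assumes a package install):
sudo service hadoop-hdfs-datanode restart

# Then confirm Non DFS Used has dropped back toward what 'df -h' shows:
hdfs dfsadmin -report | grep 'Non DFS Used'
```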

 

