Support Questions

Find answers, ask questions, and share your expertise

HDFS capacity is 0 but all DataNodes are live

New Contributor

I have 2 DataNodes and both are live. However, the dashboard shows that the HDFS Disk Usage is n/a, and capacity is zero (see the screenshot).

I tried to put a file into HDFS with "hadoop fs -put hello.txt /" and got this error:

 File /hello.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 2 datanode(s) running and no node(s) are excluded in this operation.

I take this to mean the NameNode knows about both DataNodes and that neither is excluded from the "-put" operation?

I have checked dfs.datanode.data.dir, and it points to the correct directory, which has 500 GB of available space.
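For reference, this is roughly how one can verify the space and ownership behind dfs.datanode.data.dir from the DataNode host. The path /hadoop/hdfs/data is only an assumed example; substitute the value from your own configuration:

```shell
# /hadoop/hdfs/data is an assumed example path; substitute the value of
# dfs.datanode.data.dir from your configuration.
DATA_DIR=${DATA_DIR:-/hadoop/hdfs/data}
if [ -d "$DATA_DIR" ]; then
  df -h "$DATA_DIR"                 # free space on the backing mount
  stat -c '%a %U:%G' "$DATA_DIR"    # mode and owner; expect hdfs:hadoop
else
  echo "missing: $DATA_DIR"
fi
```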

How can I resolve this issue?

[Screenshot: 79405-screen-shot-2018-07-05-at-95951-am.png]

1 ACCEPTED SOLUTION


Hey @Yun Ding!
Could you check the output of the following commands?

hdfs dfs -du -h /
hdfs dfsadmin -report
lsblk
df -h

Also check the value of this parameter in Ambari:
dfs.datanode.du.reserved
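For context, dfs.datanode.du.reserved is the number of bytes per volume that the DataNode sets aside for non-HDFS use; if it is set at or above the volume size, the advertised HDFS capacity drops to zero. A minimal sketch of the arithmetic (the 500 GB volume matches the question; the reserved value is an assumed misconfiguration, not something from this thread):

```shell
# Capacity a DataNode advertises per volume ~= volume size - du.reserved.
disk_bytes=$((500 * 1024 * 1024 * 1024))   # 500 GB volume, from the question
reserved=$((600 * 1024 * 1024 * 1024))     # assumed (misconfigured) reserve
usable=$(( disk_bytes > reserved ? disk_bytes - reserved : 0 ))
echo "usable bytes: $usable"               # prints "usable bytes: 0"
```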

PS: Just in case, check the permissions on the dfs.datanode.data.dir directory; it should be owned by hdfs:hadoop.

Hope this helps!


4 REPLIES


New Contributor

@Vinicius Higa Murakami

It was due to the permissions on the dfs.datanode.data.dir directory. Thanks!
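For anyone hitting the same issue: the fix usually comes down to the ownership and mode of the data directory. A small demo on a throwaway directory (no root needed); on a real cluster you would stop the DataNode and run the equivalent chown/chmod against your dfs.datanode.data.dir instead:

```shell
# Demo on a temp dir. On the cluster, replace "$demo" with your
# dfs.datanode.data.dir and also run: chown -R hdfs:hadoop <dir>
demo=$(mktemp -d)
chmod 750 "$demo"              # rwx for owner, r-x for group, none for other
stat -c '%a %U:%G' "$demo"     # mode 750 plus the current owner:group
rm -rf "$demo"
```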


Good to know! 🙂

New Contributor

Hi there,

Is it dfs_datanode_data_dir_perm?

What was its value previously, when HDFS couldn't write?