As you said, I removed /tmp from that directories list, and the capacity of all the nodes dropped to 40 GB or less, including slave 4.
I have never seen the same number on all the slave nodes, because of data distribution.
To even out block distribution across the cluster, HDFS ships a utility program called the balancer.
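A minimal sketch of running the balancer mentioned above; the threshold value of 10 is illustrative, not from this thread:

```shell
# Rebalance blocks so each DataNode's utilization ends up
# within 10 percentage points of the cluster average.
hdfs balancer -threshold 10
```

The balancer can run while the cluster is in use; it throttles itself and can be stopped safely at any time.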
Let me make it clear: the cluster is new and doesn't have much data in it yet. As I understand it, "available capacity" is the storage available to the DataNode (HDFS), if I'm not wrong. The actual hard disk size of each node is 500 GB, yet the available capacity on five of them is far less than on slave 4. The root (/) partition has more than 400 GB allocated, and the same should be available to HDFS. My concern is where the rest of the space went. How does data distribution come into this when my only concern is HDFS capacity? PFA.
dfs.datanode.data.dir has: /opt/hadoop/hdfs/data,/tmp/hadoop/hdfs/data,/usr/hadoop/hdfs/data,/usr/local/hadoop/hdfs/data
All the nodes have same mounts.
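One way to see why capacity differs per node: check which filesystem backs each configured data directory. The DataNode reports capacity per directory, so directories landing on small mounts (or several directories sharing one mount) skew the total. A sketch using the paths from the config above:

```shell
# Show the mount point and free space behind each configured data dir.
# Dirs that resolve to the same filesystem are drawing on the same disk.
df -h /opt/hadoop/hdfs/data /tmp/hadoop/hdfs/data \
      /usr/hadoop/hdfs/data /usr/local/hadoop/hdfs/data
```

If slave 4 has all four paths on the large root partition while the other nodes have some of them on small dedicated mounts, that would explain the gap.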
As expected, the problem is with the disks allocated in the DataNode settings.
Ambari picks up all the mounts except /boot and /mnt
You were supposed to modify the settings during the install. As you can see, data is going to /opt and the other mounts, whereas you were supposed to give only /hadoop (/ has 400 GB).
And there is no way we want to store data on /tmp.
You need to create a /hadoop directory and modify the settings so the DataNode stores its data under /hadoop.
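A sketch of the steps above on each DataNode; the hdfs:hadoop owner and the /hadoop/hdfs/data subpath are common conventions, not confirmed by this thread:

```shell
# Create the new data directory on root's 400 GB partition
# and hand it to the HDFS service user (owner/group assumed).
sudo mkdir -p /hadoop/hdfs/data
sudo chown -R hdfs:hadoop /hadoop/hdfs/data
sudo chmod 750 /hadoop/hdfs/data
```

Then, in Ambari, change dfs.datanode.data.dir to /hadoop/hdfs/data (removing the /opt, /tmp, /usr, and /usr/local entries) and restart the DataNodes so they pick up the new location.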