We have a 4-datanode HDFS cluster. There is a large amount of space available on each data node, about 98 GB, but when I look at the datanode information it is only using about 10 GB and running out of space.
How can we make it use all 98 GB and not run out of space, as indicated in the image?
This is the disk space configuration.
This is the hdfs-site.xml on the name node:
<property>
  <name>dfs.name.dir</name>
  <value>/test/hadoop/hadoopinfra/hdfs/namenode</value>
</property>
This is the hdfs-site.xml on the data node:
<property>
  <name>dfs.data.dir</name>
  <value>/test/hadoop/hadoopinfra/hdfs/datanode</value>
</property>
Even though /test has 98 GB and HDFS is configured to use it, it is not using it.
Am I missing anything in the configuration changes? And how can we make sure the 98 GB is used?
The properties dfs.data.dir and dfs.name.dir are deprecated. Please use the following properties instead.
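In Hadoop 2.x and later, the replacement property names are dfs.namenode.name.dir and dfs.datanode.data.dir. A sketch of the corresponding hdfs-site.xml entries, reusing the paths from your question:

```xml
<!-- On the name node: replaces the deprecated dfs.name.dir -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/test/hadoop/hadoopinfra/hdfs/namenode</value>
</property>

<!-- On the data node: replaces the deprecated dfs.data.dir -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/test/hadoop/hadoopinfra/hdfs/datanode</value>
</property>
```

After updating the configuration, restart the datanode (and namenode) daemons so the new directories are picked up. Also verify that /test is actually a mounted filesystem with the 98 GB, and not just a directory on the smaller root partition, since the datanode reports the capacity of the volume its data directory resides on.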