Support Questions


How to increase HDFS space


I have installed a single-node Hadoop cluster on a machine with 150 GB of disk space, but the HDFS disk space is very small. The value of dfs.datanode.data.dir is /hadoop/hdfs/data, for which the allocated space appears to be only 2.1 GB:

du -sh /hadoop/hdfs/data

2.1G    /hadoop/hdfs/data

I want to increase the size of this folder, or the HDFS space.

My second question: why was only 2.1 GB allocated to HDFS by default?

1 REPLY

Super Collaborator
@Anurag Mishra

du -sh /hadoop/hdfs/data shows the space used by that directory, not the space available. You should instead check the free space on the filesystem where the directory resides.
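To illustrate the difference between "used" and "available", Python's standard library reports the filesystem-level figures for a path, the same numbers df shows (a minimal sketch; the "/" path is just an example, substitute /hadoop/hdfs/data on the node):

```python
import shutil

# shutil.disk_usage reports totals for the filesystem containing the
# given path (what `df` shows), not the size of the directory's own
# contents (what `du -sh` shows).
usage = shutil.disk_usage("/")  # example path; use /hadoop/hdfs/data on the node
gib = 1024 ** 3
print(f"total: {usage.total / gib:.1f} GiB")
print(f"used:  {usage.used / gib:.1f} GiB")
print(f"free:  {usage.free / gib:.1f} GiB")
```

On the poster's node, "free" here is the number that bounds how much the datanode can actually store, regardless of what du reports for the directory today.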

You can check the available space using df -h command.

# df -h /hadoop/hdfs/data

Also, to see the space available to HDFS itself, you can use the hdfs command:

# hdfs dfs -df -h /

To add more space with a single datanode, you should either grow the underlying filesystem where /hadoop/hdfs/data is mounted, or create an additional filesystem, for example /hadoop/hdfs/data1, and configure the datanode directory property (dfs.datanode.data.dir) with both paths in comma-separated format.
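For example, the property in hdfs-site.xml might look like this after adding a second directory (/hadoop/hdfs/data1 is just the example name from above; the datanode must be restarted for the change to take effect):

```xml
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/hadoop/hdfs/data,/hadoop/hdfs/data1</value>
</property>
```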

You can also add HDFS space by adding another datanode to the cluster.