How to increase HDFS space
- Labels: Apache Hadoop
Created ‎07-15-2018 02:28 PM
I have installed a single-node Hadoop cluster on a machine with 150 GB of disk space, but the HDFS capacity is much smaller. The value of dfs.datanode.data.dir is /hadoop/hdfs/data, and only 2.1 GB shows up there:

du -sh /hadoop/hdfs/data
2.1G    /hadoop/hdfs/data

I want to increase the size of this folder / the HDFS space.

My second question is: how did only 2.1 GB get allocated to HDFS by default?
Created ‎07-15-2018 03:13 PM
du -sh /hadoop/hdfs/data shows the space used, not the space available. What you should check is the free space on the filesystem that holds that directory, using the df -h command:
# df -h /hadoop/hdfs/data
To see the space available to HDFS itself, use the hdfs command, which reports HDFS capacity and usage:

# hdfs dfs -df -h /
To add more space with a single datanode, you should either grow the underlying filesystem where /hadoop/hdfs/data is mounted, or create an additional filesystem (for example /hadoop/hdfs/data1) and configure the datanode directory (dfs.datanode.data.dir) with both paths in comma-separated format.

You can also add HDFS space by adding another datanode to the cluster.
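As a sketch of the comma-separated configuration described above (assuming the second mount point is named /hadoop/hdfs/data1 — adjust to your actual mount), the property in hdfs-site.xml would look like:

```xml
<property>
  <!-- The DataNode stores blocks across every listed directory; each path
       should sit on a separate filesystem so both contribute capacity. -->
  <name>dfs.datanode.data.dir</name>
  <value>/hadoop/hdfs/data,/hadoop/hdfs/data1</value>
</property>
```

After changing the value, restart the DataNode so it picks up the new directory; the extra capacity should then appear in hdfs dfs -df -h output.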
