Support Questions

Find answers, ask questions, and share your expertise

HDFS Disk Space


I need some help with increasing HDFS disk space. I have a 500 GB disk mounted for HDFS. It filled up to 94%, so we grew that same disk from 500 GB to 650 GB. After restarting the VM, I can see the new size in the lsblk output, but not in the HDFS space.

I believe I don't need to mount it again, because it is already mounted at the HDFS directory. It should be picked up on restart, right?


@Sam Red

Can you please share the lsblk output?


What is the output of df -h /mountpoint? Hadoop doesn't deal with raw disks. Remount the filesystem and restart the DataNode service, and it should work for you.
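If the block device has grown but df still shows the old size, the filesystem itself usually needs to be resized before a remount or DataNode restart will help. A minimal sketch, assuming the device is /dev/sdb and the DataNode data directory is mounted at /data (both hypothetical names for this example):

```shell
# lsblk reads the block-device size; df (and HDFS) read the
# filesystem size -- after growing a virtual disk these can differ.
lsblk /dev/sdb        # hypothetical device; should show the new 650 GB
df -h /data           # hypothetical mount point; may still show 500 GB

# Grow the filesystem to fill the enlarged device:
sudo resize2fs /dev/sdb1          # ext4 (online resize is supported)
# or, for XFS (run against the mounted path, not the device):
sudo xfs_growfs /data

# Then restart the DataNode so HDFS re-reports its capacity.
```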

@Sam Red

The process you did to increase the disk size is totally correct.

You just have to grow the disk and re-mount it; a restart will do the same if the disk has an entry in /etc/fstab for auto-mounting.

To make sure your disk size has increased, use the command below and check the mount on which the HDFS filesystem is running:

df -h 

Secondly, lsblk is a native Linux command that lists the block devices on your host. There is no equivalent command for the HDFS filesystem: HDFS doesn't deal with the raw disks or blocks mounted on your host, which are what list-block commands on operating systems like Linux and Unix show.
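To confirm what HDFS itself sees, you can ask it for the configured capacity, which is derived from the filesystems under dfs.datanode.data.dir rather than from raw devices. Two standard commands:

```shell
# Capacity and usage as reported by each DataNode:
hdfs dfsadmin -report

# Overall HDFS filesystem usage:
hdfs dfs -df -h /
```

If the capacity here still shows the old size after the filesystem has been grown, restarting the DataNode service usually makes it re-report its data directories.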

Just to add, if you are curious to run native Linux commands against the HDFS filesystem, try configuring the NFS Gateway to mount HDFS like a local mount point on a Linux host.

Refer to the Apache documentation on how to configure the NFS Gateway. You can also refer to the Hortonworks documentation for the same.
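As a rough sketch of what the NFS Gateway setup looks like (daemon invocations and mount options vary by Hadoop version, so treat the commands below as assumptions to verify against the Apache docs):

```shell
# Stop the system NFS services so the gateway's portmap can bind:
sudo service nfs stop
sudo service rpcbind stop

# Start the HDFS NFS Gateway daemons (Hadoop 3.x style invocation):
hdfs --daemon start portmap
hdfs --daemon start nfs3

# Mount HDFS like a local filesystem (replace namenode-host):
sudo mkdir -p /hdfs_mount
sudo mount -t nfs -o vers=3,proto=tcp,nolock,sync namenode-host:/ /hdfs_mount

# Now native Linux commands work against HDFS paths:
ls -l /hdfs_mount
```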

I hope this helps answer your query.


