
Increasing the HDFS Disk size in HDP 2.3 3 node cluster

Rising Star

In my 3 node cluster installation for a POC, the 3rd node is the DataNode; it has a disk space of about 200 GB.

As per the widget, my current HDFS Usage is as follows:

DFS Used: 512.8 MB (1.02%); Non DFS Used: 8.1 GB (16.52%); Remaining: 40.4 GB (82.46%)

When I do df -h to check the disk size, I can see that a lot of space is taken by tmpfs, as shown in the following screenshot:

[Screenshot 2403-disk-size.png: df -h output]

How can I increase my HDFS disk size?
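
For reference, the numbers the Ambari widget reports can be cross-checked from the command line. A minimal sketch, assuming the default HDP data directory /hadoop/hdfs/data (adjust if your dfs.datanode.data.dir points elsewhere):

    # Per-DataNode capacity, DFS Used, and Non DFS Used, run as the hdfs user
    su - hdfs -c "hdfs dfsadmin -report"

    # Filesystem-level view of the mount backing the DataNode data directory
    df -h /hadoop/hdfs/data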

1 ACCEPTED SOLUTION

Master Mentor
@Kunal Gaikwad

tmpfs is out of the picture here; it does not count toward HDFS capacity.

You need to increase the /home space. Is it a VM? If yes, you can reach out to your system admin to allocate more space to /home.

If it's bare metal, attach another disk and make it part of your cluster.
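
A rough sketch of what attaching and wiring in a new disk can look like on the DataNode; the device /dev/sdb and mount point /grid/1 are examples, not taken from this cluster:

    # Format and mount the new disk
    mkfs.xfs /dev/sdb
    mkdir -p /grid/1
    mount /dev/sdb /grid/1
    echo "/dev/sdb /grid/1 xfs defaults,noatime 0 0" >> /etc/fstab

    # Create and own a data directory for HDFS on it
    mkdir -p /grid/1/hadoop/hdfs/data
    chown -R hdfs:hadoop /grid/1/hadoop/hdfs/data

Then append /grid/1/hadoop/hdfs/data to dfs.datanode.data.dir in Ambari (HDFS > Configs) and restart the DataNode so the new capacity is picked up.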


14 REPLIES


Rising Star

Yes, the cluster is on VMware vSphere.

Master Mentor

@Kunal Gaikwad That's easy then. See the link that I shared above.

Rising Star

My /home partition is already 130 GB, but I am not able to use it for HDFS, as mentioned above per the widget. My concern is that this should not hamper the HDP installation I have already done on it.

Master Mentor

@Kunal Gaikwad I believe you are using / for Hadoop. Is that correct?

Generally, extending / is more complicated than extending a non-root partition.

The install will be fine as long as there is no human error.

If possible, you can add a 4th node to the cluster and decommission the 3rd one.
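
If you go the decommission route, Ambari can drive it from the Hosts page; outside Ambari it follows the standard HDFS exclude-file flow. A sketch, assuming dfs.hosts.exclude points at /etc/hadoop/conf/dfs.exclude and node3.example.com stands in for the 3rd node:

    # Mark the DataNode for decommissioning
    echo "node3.example.com" >> /etc/hadoop/conf/dfs.exclude

    # Ask the NameNode to re-read the include/exclude lists
    su - hdfs -c "hdfs dfsadmin -refreshNodes"

    # Wait until the node shows as Decommissioned before shutting it down
    su - hdfs -c "hdfs dfsadmin -report"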

Rising Star

Yes, I am using / for Hadoop. These partitions were done automatically by Ambari, so I want to increase the HDFS size. For the POC I wanted to import a table of about 70 GB, but because of the current HDFS size I can import only 30+ GB, and then the job hangs with alerts all over Ambari about disk usage.

Master Mentor

@Kunal Gaikwad That's why you have to be careful about assigning mounts during the install. If possible, add new nodes and make sure to add the new disk mounts in the configs.

Or extend the existing LUNs.
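
For the LUN/virtual-disk route: once the vSphere admin grows the underlying disk, extending a default CentOS 7 LVM root would look roughly like this (device and partition names are examples; growpart comes from the cloud-utils-growpart package):

    # Grow the partition that backs the LVM physical volume
    growpart /dev/sda 2

    # Make LVM see the extra space, extend the root LV, and grow XFS online
    pvresize /dev/sda2
    lvextend -l +100%FREE /dev/centos/root
    xfs_growfs /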

Rising Star

I am trying to upgrade the storage; I have taken a snapshot of the cluster. Just to be sure, I need to increase the /dev/mapper/centos-root storage size, right?