
Increasing the HDFS Disk size in HDP 2.3 3 node cluster


I have a 3-node cluster installed for a POC. The 3rd node is the DataNode and has about 200 GB of disk space.

As per the widget, my current HDFS Usage is as follows:

DFS Used: 512.8 MB (1.02%); Non-DFS Used: 8.1 GB (16.52%); Remaining: 40.4 GB (82.46%)
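As a sanity check, the three figures in the widget imply a total configured HDFS capacity of only about 49 GB, far below the 200 GB disk, which is the crux of the problem. A minimal sketch of that arithmetic, using the numbers from the widget:

```python
# Figures as reported by the Ambari widget
dfs_used_gb = 512.8 / 1024   # 512.8 MB expressed in GB
non_dfs_used_gb = 8.1
remaining_gb = 40.4

# The total capacity HDFS has actually been configured with
total_gb = dfs_used_gb + non_dfs_used_gb + remaining_gb
print(f"Configured capacity: {total_gb:.1f} GB")   # ~49 GB, not 200 GB

# Cross-check against the reported percentages
print(f"DFS used:  {dfs_used_gb / total_gb:.2%}")  # ~1.02%, matching the widget
print(f"Remaining: {remaining_gb / total_gb:.2%}") # close to the widget's 82.46%
```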

When I run `df -h` to check the disk usage, I can see that a lot of space is taken by tmpfs, as shown in the following screenshot:

[Screenshot: 2403-disk-size.png]

How can I increase my HDFS disk size?
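For reference, the same numbers can also be read from the command line; a minimal sketch, assuming shell access to a cluster node with the `hdfs` client on the PATH:

```shell
# Per-DataNode configured capacity, DFS used, and non-DFS used
hdfs dfsadmin -report

# Which local directories back HDFS on the DataNode
hdfs getconf -confKey dfs.datanode.data.dir

# OS-level view of the partition holding those directories
df -h /
```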

1 ACCEPTED SOLUTION

@Kunal Gaikwad

tmpfs is out of the picture.

You need to increase the /home space. Is it a VM? If yes, you can ask your system admin to allocate more space to /home.

If it's bare metal, attach another disk and make it part of your cluster.
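Making an attached disk "part of your cluster" means mounting it and adding the mount point to `dfs.datanode.data.dir` (via Ambari on HDP). A rough sketch, where the device name `/dev/sdb` and the mount point `/grid/1` are assumptions for illustration:

```shell
# Format and mount the new disk (device name /dev/sdb is an assumption)
mkfs.xfs /dev/sdb
mkdir -p /grid/1
mount /dev/sdb /grid/1
echo '/dev/sdb /grid/1 xfs defaults,noatime 0 0' >> /etc/fstab

# Then, in Ambari: HDFS -> Configs -> dfs.datanode.data.dir,
# append the new directory, e.g.:
#   /hadoop/hdfs/data,/grid/1/hadoop/hdfs/data
# and restart the DataNodes.
```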


14 REPLIES



Yes, the cluster is on VMware vSphere VMs.


@Kunal Gaikwad That's easy, then. See the link I shared above.


My /home partition is already 130 GB, but as mentioned above, according to the widget I am not able to use it for HDFS. My concern is that this should not hamper the HDP installation I have already done on it.


@Kunal Gaikwad I believe you are using / for Hadoop. Is that correct?

Generally, extending / is more complicated than extending a non-root partition.

The install will be fine as long as there is no human error.

If possible, you can add a 4th node to the cluster and decommission the 3rd one.
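Decommissioning a DataNode is driven by the exclude file referenced by `dfs.hosts.exclude`; a hedged sketch, where the file path and hostname are assumptions (check your own hdfs-site.xml / Ambari configs):

```shell
# Add the DataNode's hostname to the exclude file
# (path /etc/hadoop/conf/dfs.exclude is an assumption)
echo 'node3.example.com' >> /etc/hadoop/conf/dfs.exclude

# Tell the NameNode to re-read the include/exclude lists
hdfs dfsadmin -refreshNodes

# Watch the report until the node shows "Decommissioned"
hdfs dfsadmin -report
```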


Yes, I am using / for Hadoop. These partitions were set up automatically during the Ambari install, so I want to increase the HDFS size. For the POC I wanted to import a table of about 70 GB, but because of the current HDFS size I can only import 30+ GB before the job hangs, with alerts all over Ambari about disk usage.


@Kunal Gaikwad That's why you have to be careful about which mounts you assign during the install. If possible, add new nodes and make sure to add the new disk mounts in the configs.

Or extend the existing LUNs.
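Extending an existing LUN under LVM roughly amounts to growing the physical volume and then the logical volume; a sketch for a default CentOS 7 layout like the one in this thread (the device names are assumptions):

```shell
# After the LUN/virtual disk has been grown on the storage/vSphere side:
pvresize /dev/sda2                       # let LVM see the extra space
lvextend -l +100%FREE /dev/centos/root   # grow the root LV into it
xfs_growfs /                             # grow the XFS filesystem online
```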


I am trying to upgrade the storage and have taken a snapshot of the cluster. Just to be sure: I need to increase the /dev/mapper/centos-root storage size, right?


@Kunal Gaikwad

Yes, as it's your root partition, where the Hadoop data lives.


@Kunal Gaikwad Did it work out fine?


Yes, we increased the root space without hampering the installation. We followed a simple process: we created an external virtual disk and merged it with the root volume.
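The "create an external virtual disk and merge it with root" step can be sketched as follows, assuming the new vSphere disk appears as `/dev/sdb` and the volume group is named `centos` (both are assumptions; check with `lsblk` and `vgs`):

```shell
pvcreate /dev/sdb                        # initialize the new disk for LVM
vgextend centos /dev/sdb                 # add it to the root volume group
lvextend -l +100%FREE /dev/centos/root   # grow the root logical volume
xfs_growfs /                             # grow the filesystem (XFS on CentOS 7)
```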

Expert Contributor

@Neeraj Sabharwal How do I increase it when it's not a VM, please?

Contributor

How did you solve this problem?
