Support Questions

Find answers, ask questions, and share your expertise

Running out of space: how to increase disk capacity on my cluster

Explorer

I asked a question yesterday about increasing disk capacity on my cluster in Ambari. After lots and lots of reading, I think I didn't ask the question correctly.

To start, I'm getting several warnings that I'm running out of HDFS disk:

(screenshots: Ambari alerts showing HDFS disk usage warnings)

Since I'm on a virtualized GCP environment, I just changed the disk capacities from 10 GB to 20 GB on my 4 VMs.

Then I went into the CLI on each node and ran df -h

Obviously, I don't see the extra capacity I added since it hasn't been mounted, but this is where I get confused.
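For anyone hitting the same confusion: after a resize in the cloud console, the block device grows but the filesystem does not, so df keeps reporting the old size. A quick way to see the gap (device names like /dev/sda are examples; yours may differ):

```shell
# lsblk shows the raw device size as the hypervisor presents it --
# after a GCP resize this should already read the new, larger size.
lsblk

# df shows the size of the *filesystem*, which stays at the old size
# until the partition and filesystem are grown to fill the new space.
df -h /
```

If lsblk reports 20G but df still says 10G, the resize reached the VM but the filesystem hasn't been extended yet.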

Looking at this thread:

https://community.hortonworks.com/questions/21687/how-to-increase-the-capacity-of-hdfs.html

I should create a directory at / and another in /home, point one to the other, change ownership and permissions, and then mount it.
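As a sketch of that first approach, assuming the extra capacity shows up as a separate new disk (here /dev/sdb; check with lsblk) rather than a grown existing one. The /hadoopfs path and the hdfs:hadoop owner are typical HDP conventions, not from the thread, so adjust them to your layout:

```shell
# WARNING: mkfs destroys anything on the target device -- only run it
# against the new, empty disk.
sudo mkfs.ext4 /dev/sdb

sudo mkdir -p /hadoopfs                 # mount point at /
sudo mount /dev/sdb /hadoopfs           # mount the new disk
sudo mkdir -p /hadoopfs/hdfs/data       # DataNode directory on the new disk
sudo chown -R hdfs:hadoop /hadoopfs/hdfs
sudo chmod 750 /hadoopfs/hdfs/data

# Persist the mount across reboots:
echo '/dev/sdb /hadoopfs ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab
```

Note this only applies if you attached a second disk; if you enlarged the existing disk in place, you need the resize2fs route instead.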

But reading this thread:

https://community.hortonworks.com/questions/21212/configure-storage-capacity-of-hadoop-cluster.html

I just need to create a directory, tell Ambari where that directory is under HDFS --> Configs (tab) --> DataNode box and restart the cluster.
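To make the second approach concrete: once the new space is actually mounted, you add the directory to the DataNode's dfs.datanode.data.dir property in Ambari and restart HDFS. A sketch, with example paths (the /hadoopfs path is an assumption from my earlier example):

```shell
# In Ambari: HDFS -> Configs -> DataNode -> "DataNode directories",
# a comma-separated list, e.g.:
#   /hadoop/hdfs/data,/hadoopfs/hdfs/data
# Save, then restart the affected HDFS components.

# Afterwards, verify the cluster sees the added capacity:
sudo -u hdfs hdfs dfsadmin -report | head -n 20   # cluster-wide capacity
hdfs dfs -df -h                                   # HDFS usage in human units
```

The key point in both threads is the same: Ambari can only use a directory that is backed by real, mounted filesystem capacity.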

The problem is, I've tried both methods and neither seems to be working for me. What am I doing wrong?

TIA!


2 REPLIES

Mentor

@Mike Wong

Adding disk size in the console isn't enough; you will have to run resize2fs on each node where you increased the disk. I think the GCP documentation on online resizing of persistent disks will walk you through it.
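Roughly, the per-node steps look like this, assuming the root filesystem is ext4 on /dev/sda1 (device and partition names are assumptions; confirm them with lsblk first):

```shell
# Confirm the device and partition names, and that the device already
# shows the enlarged size:
lsblk

# If the disk is partitioned, grow the partition first. growpart ships
# in the cloud-utils-growpart / cloud-guest-utils package:
sudo growpart /dev/sda 1

# Grow the ext4 filesystem to fill the partition; ext4 supports this
# online, while the filesystem is mounted:
sudo resize2fs /dev/sda1

# df should now report the new capacity:
df -h /
```

Once df on each node shows the extra space, HDFS will pick it up after a DataNode restart, since its data directories now sit on a larger filesystem.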

Hope that helps

If you found this answer addressed your question, please take a moment to log in and click the "accept" link on the answer.

Mentor

@Mike Wong

Any updates?