New Contributor
Posts: 4
Registered: ‎06-07-2017

Resizing HDFS

Any suggestions on how to resize the HDFS filesystem with CDH?

 

I want to free up one disk on each DataNode (currently used by HDFS) in order to deploy Kudu with CDH 5.13.

 

Seems kinda risky to just remove the local path from Cloudera Manager...

 

Thanks! :)

Cloudera Employee
Posts: 4
Registered: ‎07-30-2018

Re: Resizing HDFS

Follow the steps below to remove one disk from each node. Make sure you have a replication factor of 3, so the blocks on the removed disk can be re-replicated from the copies on other nodes.

1. Confirm there are no under-replicated blocks in the NameNode web UI (you can also check from the command line; see the sketch after these steps). If you see any under-replicated blocks, wait until they clear.

2. Go to the configuration of the first DataNode and remove the unwanted data directory:
CM => HDFS => Instances => DataNode (any one) => Configuration => dfs.data.dir (remove the directories that are no longer required)

This removes the data directory from that particular DataNode only.

3. After saving the changes, refresh the cluster from CM; the affected DataNode has to be restarted to pick up the new directory list.

4. Repeat from step 1 for each remaining DataNode.
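
Here's a minimal command-line version of the check in step 1, assuming you run it as the hdfs superuser (the grep patterns just pull out the relevant lines of each report):

    # Cluster summary; look for "Under replicated blocks: 0"
    sudo -u hdfs hdfs dfsadmin -report | grep -i "under replicated"

    # Full filesystem check; also prints an under-replicated block count
    sudo -u hdfs hdfs fsck / | grep -i "under-replicated"

Only move on to step 2 once both counts are 0.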

Cloudera Employee
Posts: 52
Registered: ‎09-08-2017

Re: Resizing HDFS


If you'd like to avoid bringing down any instances, you can also use the hot-swap steps for drive removal in our documentation.

 

Whichever steps you decide to use, it's important to do this on only 1-2 DataNodes at a time; otherwise you risk data loss.
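
For reference, a minimal sketch of the hot-swap flow using the stock HDFS CLI; the host name below is a placeholder, and the IPC port assumes the CDH 5 default (50020):

    # After removing the directory from dfs.datanode.data.dir for that
    # DataNode (in CM or hdfs-site.xml), ask it to reconfigure in place
    # without a restart:
    hdfs dfsadmin -reconfig datanode dn01.example.com:50020 start

    # Poll until the reconfiguration task reports that it has finished:
    hdfs dfsadmin -reconfig datanode dn01.example.com:50020 status

As with the restart-based steps above, confirm there are no under-replicated blocks before touching the next node.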
