Support Questions
Find answers, ask questions, and share your expertise

Resizing HDFS


New Contributor

Any suggestions as to how to go about resizing the HDFS filesystem with CDH?

 

I want to free up one disk on each DataNode (currently used by HDFS) in order to deploy Kudu with CDH 5.13.

 

Seems kinda risky to just remove the local path from Cloudera Manager...

 

Thanks! :)

2 REPLIES

Re: Resizing HDFS

Cloudera Employee

Follow the steps below to remove one disk from each node. Make sure you have a replication factor of 3.

1. Confirm there are no under-replicated blocks in the NameNode web UI. If you see any under-replicated blocks, wait until they are resolved before continuing.

2. Go to the configuration of the first DataNode and remove the unwanted data directory:
CM => HDFS => Instances => DataNode (any one) => Configuration => dfs.data.dir (remove the directories that are no longer required)

This removes the data directories for that particular DataNode only.

3. After saving the changes, refresh the cluster from CM.

4. Repeat from step 1 for each remaining DataNode.
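The under-replication check in step 1 can also be done from the command line instead of the NameNode web UI. A quick sketch, assuming an HDFS client gateway and a user authorized to run fsck:

```shell
# Count under-replicated blocks across the whole filesystem.
# A non-zero count means you should wait before removing another directory.
hdfs fsck / | grep -i 'Under-replicated blocks'

# Also confirm the default replication factor is 3 before starting:
hdfs getconf -confKey dfs.replication
```

Re-run the fsck check between each DataNode you reconfigure, so HDFS has re-replicated the removed directory's blocks before you touch the next node.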


Re: Resizing HDFS

Cloudera Employee

If you'd like to avoid bringing down any specific instances, you can also utilize the hot swap steps in our documentation for drive removal. 

 

It's important to note that whatever steps you decide to use, this should only be done on 1-2 DataNodes at a time or else you risk data loss.
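For reference, the hot-swap path (outside of Cloudera Manager) relies on the DataNode's runtime reconfiguration feature: remove the drive's directory from dfs.datanode.data.dir in that DataNode's hdfs-site.xml, then ask the DataNode to reconfigure itself. A rough sketch, using a hypothetical DataNode address:

```shell
# Hypothetical DataNode; replace with your host and IPC port
# (50020 is the default dfs.datanode.ipc.address port on Hadoop 2 / CDH 5)
DN="dn1.example.com:50020"

# After removing the drive's directory from dfs.datanode.data.dir in hdfs-site.xml:
hdfs dfsadmin -reconfig datanode "$DN" start

# Poll until the reconfiguration task reports completion
hdfs dfsadmin -reconfig datanode "$DN" status
```

As noted above, apply this to only 1-2 DataNodes at a time and let re-replication finish in between.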
