Support Questions

Find answers, ask questions, and share your expertise

How to reduce the replication factor of an HDFS directory, and its impact

New Contributor

We are using Hortonworks HDP 2.1 (HDFS 2.4) with replication factor 3. We recently decommissioned a datanode, which left a lot of under-replicated blocks in the cluster.

The cluster is now trying to satisfy the replication factor by re-replicating the under-replicated blocks to the other nodes.

  1. How do I stop that process? I am OK with some files being replicated only twice. If I change the replication factor to 2 on that directory, will the process be terminated?
  2. What is the impact of changing the replication factor to 2 on a directory whose files currently have 3 copies? Will the cluster start another process to remove the excess copy of each file?

I appreciate your help on this.

5 REPLIES

Super Collaborator

1. First, run hadoop fsck / to identify the under-replicated blocks. Then run hadoop fs -setrep 2 on the affected files (or on the directory, with -R to apply it recursively). This will stop the re-replication process.

2. Yes, it will remove the third copy.
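The two steps above can be sketched as shell commands. This is a minimal sketch; the directory path is a placeholder, and it assumes the stock HDFS CLI shipped with HDP 2.1:

```shell
# 1. Report files with under-replicated blocks (fsck prints a
#    per-file line plus a cluster-wide summary at the end)
hadoop fsck / -files -blocks | grep -i "Under replicated"

# 2. Lower the replication factor to 2, recursively, on the
#    affected directory (path is a placeholder). Adding -w makes
#    the command wait until the new factor is satisfied.
hadoop fs -setrep -R -w 2 /user/example/dir
```

Note that -setrep only changes the target factor for files that already exist; new files written to the directory afterwards still use the client's dfs.replication setting unless it is overridden.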

New Contributor

So eventually, -setrep 2 stops the re-replication process and spawns another process (deleting the third copy), right? Is there any way to stop the cluster from removing the third copy? I'm trying to reduce CPU utilization.

Super Collaborator

No, the NameNode does this automatically. When a file's replication factor is lowered, the NameNode detects the now over-replicated blocks and schedules deletion of the excess replicas; there is no setting to keep the extra copy around.
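You can watch this happen with fsck, which reports over-replicated blocks while the NameNode is still pruning the excess copies (the path below is a placeholder):

```shell
# While excess replicas are being deleted, the fsck summary for the
# directory shows a non-zero "Over-replicated blocks" count; it drops
# back to 0 once the NameNode has finished pruning.
hadoop fsck /user/example/dir | grep -i "Over-replicated"
```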

New Contributor

Would you be able to share any references on the second answer?

Super Collaborator