
Changing HDFS replication factor on existing files

New Contributor

Hi Cloudera support team, 

 

We're trying to increase available disk space in our cluster by decreasing the replication factor from 3 to 2 for some of our HDFS directories using the "hdfs dfs -setrep" command.

I have a few questions:

 

  1. How can we estimate how long this command would take for a single directory (without -w)?
  2. Will it trigger a replication job even if I don't use the '-w' flag?
  3. If yes, does it mean that the NameNode will actually start deleting 'over-replicated' blocks of all existing files under a particular directory?

 

Thank you 

2 Replies

Master Mentor

@AlexP 

Changing the default replication factor (dfs.replication) doesn't change the replication factor of existing files; it only applies to new files created after the change.
You will have to change the replication factor of the old files explicitly with "hdfs dfs -setrep".
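
Before and after doing that, you can check what existing files are actually set to; the replication factor is the second column of -ls, or you can query it with -stat (the /apps/testfile.txt path is only an example):

$ hdfs dfs -ls /apps/

$ hdfs dfs -stat %r /apps/testfile.txt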

To bulk change the replication factor

 

$ hdfs dfs -setrep -R -w 2 /apps/

 

Changing the replication factor of a single file

 

$ hdfs dfs -setrep -w 3 /apps/testfile.txt

 

Reducing the replication factor also speeds up writes, since you write to fewer DataNodes, and it reduces NameNode metadata; the trade-off is extra overhead on reads, because it's harder to find a node that holds a replica.


How can we estimate how long this command would take for a single directory (without -w)?
This will depend on the amount of data under the directory and on your cluster's processing power.
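
As a rough sizing step (a sketch only, not an exact estimate), you can first see how many files and how much data sit under the directory:

# Directory count, file count, content size, path
$ hdfs dfs -count /apps/

# Human-readable total size of the subtree
$ hdfs dfs -du -s -h /apps/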

 

Will it trigger a replication job even if I don't use the '-w' flag?

Yes. Once you change the replication factor, the internal block reporting mechanism kicks in to update the NameNode about the replicas, and the excess replicas are marked as over-replicated and eligible for deletion.

 

If yes, does it mean that the NameNode will actually start deleting 'over-replicated' blocks of all existing files under a particular directory?

Yes. After you reduce the replication factor, the affected blocks become over-replicated; the NameNode detects this from its metadata and chooses a replica to remove. It removes the replica from the DataNode with the least available disk space, which helps rebalance disk usage across the cluster.
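
If you want to watch this on a specific file (again, the path is only an example), fsck can print the block locations; the number of replicas listed per block should drop to the new target once the excess copies are deleted:

$ hdfs fsck /apps/testfile.txt -files -blocks -locations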

HDFS fsck is used to check the health of the file system and to find missing, over-replicated, under-replicated, and corrupt blocks. Run the commands below.

To list corrupt or under-replicated files

 

$ hdfs fsck /

$ hdfs fsck / | grep 'Under replicated' | awk -F':' '{print $1}' >> /tmp/under_replicated_files

 

To fix under-replicated files in bulk (this example sets their replication factor to 1)

 

# To turbo charge use xargs -n 500 (or --max-args 500)

 

$ cat /tmp/under_replicated_files | xargs -n 500 hdfs dfs -setrep 1

 

You can also put the above commands in a crontab
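
For example, a nightly crontab entry along these lines (the 02:00 schedule, the replication target of 1, and the /tmp path are only illustrative assumptions):

# Every day at 02:00: collect under-replicated paths, then re-apply setrep in batches of 500
0 2 * * * hdfs fsck / | grep 'Under replicated' | awk -F':' '{print $1}' > /tmp/under_replicated_files && xargs -n 500 hdfs dfs -setrep 1 < /tmp/under_replicated_files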

Expert Contributor

Hello @AlexP 

 

Ref: https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/FileSystemShell.html#setrep

 

Referring to the HDFS documentation, answers to your questions are inline.

[Q1.] How can we estimate how long this command would take for a single directory (without -w)?

[A1.] It depends on the number of files in the directory. If you run setrep against a path that is a directory, the command recursively changes the replication factor of every file under the directory tree rooted at that path, so the time varies with the file count under the path.
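
A quick way to get that file count (the /apps/ path is only an example; the grep keeps only lines whose permission string starts with '-', i.e. files rather than directories):

$ hdfs dfs -ls -R /apps/ | grep -c '^-'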

 

[Q2.] Will it trigger a replication job even if I don't use the '-w' flag?

[A2.] Yes, replication changes are triggered without the -w flag. However, it is good practice to use -w to make sure all files have the required replication factor set before the command exits. The -w flag requests that the command wait for the replication to complete; this can take a long time, but it guarantees that the replication factor has actually been changed to the specified value.
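
As a sketch, reusing the target of 2 and the /apps/ path from the earlier examples:

# Returns once the NameNode metadata is updated; replica adjustments continue in the background
$ hdfs dfs -setrep 2 /apps/

# Blocks until every file under /apps/ actually has 2 replicas
$ hdfs dfs -setrep -w 2 /apps/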

 

[Q3.] If yes, does it mean that the NameNode will actually start deleting 'over-replicated' blocks of all existing files under a particular directory?

[A3.] Yes, your understanding is correct. The extra replica of each block will be marked as over-replicated and will be deleted from the cluster. This is done for every file under the directory path, keeping only 2 replicas of each file's blocks.
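
To confirm the cleanup has finished for the directory (the path is only an example), the fsck summary reports over-replicated blocks, which should return to zero once the excess replicas are removed:

$ hdfs fsck /apps/ | grep -i 'Over-replicated blocks'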

 

Hope this helps.