
Changing rack awareness and reducing the replication factor in a running production Hadoop cluster

Contributor

Hi All,

 

We're running a Hadoop cluster on CDH 5.4.11 in an AWS environment. Because we're seeing exponential growth in our cluster, we've decided to reduce the replication factor from 3 to 2. Our cluster is also not set up with rack awareness.

 

We're planning to reduce the replication factor from 3 to 2 and configure rack awareness with 2 racks (optionally, racks and sub-racks if possible), on the same day.

 

Can someone suggest the best practice?

 

  •  Should we change rack awareness first and then the replication factor?
  •  If we change rack awareness, do we need to restart all the services, and if we do, will it cause any data to move across the cluster?
  •  If data does move across the cluster, would it help to change the replication factor first and then rack awareness, to reduce data movement across the datanodes?

Thanks in advance!!!

 

 

 


6 REPLIES

Champion

Rack awareness serves three purposes: data locality, data redundancy, and reducing the network bandwidth requirement.  The replication factor sets your data redundancy level.

 

It does not seem wise to arbitrarily change either because of cluster growth.  Simply buy more nodes and expand.

 

To address the original question:

 

1. Changing the replication factor will mark the third replica in every block's set as excess and remove it.  Due to the write workflow of HDFS, that means the remaining two replicas will be split between at least two racks.

 

2. Adjusting the rack topology will not impact any existing data.  It will affect MR job performance, since blocks may no longer be local within the new rack topology.  Newly written data will be split between the two racks.

 

No matter the order, if you do both you will be adding the risk of a block's two replicas ending up within the same rack.  You can run the balancer immediately afterwards, and that should help, as the balancer will abide by the new rack topology, but it won't necessarily touch or move all of the blocks.
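
To make the order-of-operations point concrete, here is a minimal command-line sketch (not from this thread; the path and balancer threshold are assumptions): lowering the dfs.replication default only affects new writes, so existing files have to be set to 2 explicitly, and the balancer can then be run against the new topology.

hdfs dfs -setrep -w 2 /          # rewrite existing files to 2 replicas; -w waits for completion and can take a long time on a big tree
hdfs balancer -threshold 10      # rebalance under the new rack topology; 10% is only an example threshold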

 

Contributor

Thank you very much.

 

I believe we need to restart all the services (i.e., HDFS & YARN) after changing the rack topology. Correct me if I'm wrong.

Champion
Yes you do.
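
For context on what a rack topology change involves outside of Cloudera Manager's per-host rack assignment, Hadoop resolves racks through a script named by net.topology.script.file.name in core-site.xml. The sketch below is only illustrative; the subnets and rack labels are assumptions, not values from this cluster.

#!/bin/bash
# Hypothetical rack-mapping script (referenced by net.topology.script.file.name).
# Hadoop passes one or more datanode IPs/hostnames as arguments and expects
# one rack path per argument on stdout.
for host in "$@"; do
  case "$host" in
    10.0.1.*) echo "/rack1" ;;
    10.0.2.*) echo "/rack2" ;;
    *)        echo "/default-rack" ;;
  esac
done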

Contributor

Thanks for the quick reply.

 

I have tried this in our cluster and I am facing two issues. Could you please help me with the issues below?

 

1. The NameNode reports a huge number of blocks as missing (although it's less than 1% of the blocks in our cluster). I'm not sure why it's showing that many blocks as missing. Is it due to the new rack topology? (I think it's not, since if it were due to the rack topology they should show as mis-replicated; correct me if I'm wrong.)

2. Even after changing the replication factor to 2 in hdfs-site.xml via Cloudera Manager -> HDFS -> Configuration -> 'HDFS Service Advanced Configuration Snippet (Safety Valve) for hdfs-site.xml', the replication factor is not changing for new files; it's still 3.

 

When I run hdfs fsck it shows the default replication factor as 2 but for all the new files the replication factor is still 3.

 

Total blocks (validated): 1 (avg. block size 6380382 B)
********************************
CORRUPT FILES: 1
MISSING BLOCKS: 1
MISSING SIZE: 6380382 B
CORRUPT BLOCKS: 1
********************************
Minimally replicated blocks: 0 (0.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 0 (0.0 %)
Mis-replicated blocks: 1 (100.0 %)
Default replication factor: 2
Average block replication: 0.0
Corrupt blocks: 1
Missing replicas: 0
Number of data-nodes: 34
Number of racks: 4
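
For reference, a quick way to confirm the factor actually recorded on an individual new file (the path below is just an example) is the %r field of hdfs dfs -stat:

hdfs dfs -stat "name=%n repl=%r" /tmp/some-new-file.txt   # %r prints the file's replication factor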

 

Thanks in advance!!

Champion

Did you include all existing nodes in the new racks?  That is the only thing I can think of.  If you missed one, it would be considered decommissioned, and those blocks would be reported as missing or under-replicated until they are replicated to other nodes.  You are correct: it would report as mis-replicated after the topology change if two replicas were in the same rack.

 

I have seen the replication issue pop up before.  I don't know what the resolution ended up being, but it is critical to remember that it is a client-side setting, so if a client is still using 3 as the replication factor, then that data will have 3 replicas for each block.
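
As a simple illustration of that client-side behaviour (the paths are examples, not from this thread), any writer can override dfs.replication for a single command, and a client whose own hdfs-site.xml still says 3 will keep producing 3 replicas until its configuration is updated:

hdfs dfs -D dfs.replication=2 -put localfile.txt /tmp/localfile.txt   # this particular write gets 2 replicas regardless of the server default
hdfs dfs -stat "repl=%r" /tmp/localfile.txt                           # confirm what the NameNode recorded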

Contributor

Thank you very much @mbigelow

 

I was able to fix both the missing blocks issue and the replication factor change.

 

Missing block issue: all the datanodes were included in a rack; it was a configuration issue we had in our cluster that caused the missing blocks.

 

Replication factor: yes, we needed to change the client configuration value, redeploy the client configuration files, and restart HDFS, YARN, and all other client services that require this update.
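
A minimal way to sanity-check the redeployed client configuration (the /etc/hadoop/conf path is the usual location for the deployed client config and is an assumption here):

grep -A1 dfs.replication /etc/hadoop/conf/hdfs-site.xml   # confirm the gateway/client config carries the new value
hdfs dfs -put /etc/hosts /tmp/repl-check                  # write a throwaway file...
hdfs dfs -stat "repl=%r" /tmp/repl-check                  # ...and confirm it lands with 2 replicas
hdfs dfs -rm /tmp/repl-check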

 

The following links were useful for me when changing the client configuration files:

 

https://www.cloudera.com/documentation/enterprise/5-6-x/topics/cm_mc_mod_configs.html

 

http://grokbase.com/t/cloudera/scm-users/126wjwf5da/setting-the-replication-value