A customer wants to use Cloudbreak for deploying Hadoop clusters. They want to scale the Hadoop storage nodes up and down.

a) How does HDFS detect a scale-down of nodes, and will it kick off an HDFS rebalance?

- Cloudbreak instructs Ambari via a decommission REST API call to decommission the DataNode and NodeManager (see the curl sketch after this list).

- Ambari triggers the decommission on the HDP cluster; from a bird's-eye perspective this is what happens: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_Sys_Admin_Guides/content/ref-a179736c-eb... but in an automated way.

- Decommissioning DataNodes can take a long time if you have a lot of blocks, because HDFS needs to replicate the blocks belonging to the decommissioning DataNodes to other live DataNodes in order to reach the replication factor you have specified via dfs.replication in hdfs-site.xml (the default is 3).

- You can get feedback on the decommission process, e.g. from the NameNode UI at http://<namenode_ip>:50070/dfshealth.html#tab-datanode, or you can use command-line tools like "hdfs fsck /" (see the status-check sketch after this list).

- Cloudbreak periodically polls Ambari for the status of the decommissioning, and Ambari monitors the NameNode (see the polling sketch after this list).

- Once decommissioning has finished, Cloudbreak removes the node from Ambari and deletes the decommissioned VMs from the cloud provider.
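
For reference, this is roughly the kind of decommission call Cloudbreak makes against the Ambari REST API. It is a minimal sketch: the credentials, host names, and cluster name are placeholders, and the exact payload can differ between Ambari versions.

```bash
# Sketch of an Ambari REST call that decommissions a DataNode (placeholders: admin:admin,
# datanode1.example.com, <ambari_host>, <cluster_name>); payload details vary by Ambari version.
curl -u admin:admin -H "X-Requested-By: ambari" -X POST \
  -d '{
        "RequestInfo": {
          "context": "Decommission DataNode",
          "command": "DECOMMISSION",
          "parameters": { "slave_type": "DATANODE", "excluded_hosts": "datanode1.example.com" }
        },
        "Requests/resource_filters": [
          { "service_name": "HDFS", "component_name": "NAMENODE" }
        ]
      }' \
  "http://<ambari_host>:8080/api/v1/clusters/<cluster_name>/requests"
```

A similar call with "slave_type": "NODEMANAGER" against the YARN RESOURCEMANAGER component should decommission the NodeManager on the same host.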
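
To watch the same process from the HDFS side, the standard command-line tools are enough; a small sketch:

```bash
# Current replication factor (dfs.replication, default 3)
hdfs getconf -confKey dfs.replication

# DataNode report, including the decommissioning status of each node
hdfs dfsadmin -report

# File system check: reports under-replicated, missing and corrupt blocks
hdfs fsck /
```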
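
Cloudbreak's polling boils down to reading the request resource it created, and you can do the same by hand. The request id, credentials, and host below are placeholders:

```bash
# Ask Ambari for the progress of the decommission request (id 42 is a placeholder);
# the response contains fields such as request_status and progress_percent.
curl -u admin:admin -H "X-Requested-By: ambari" \
  "http://<ambari_host>:8080/api/v1/clusters/<cluster_name>/requests/42"
```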

b) For scale-up, would we need to manually kick off an HDFS rebalance?

- Cloudbreak does not trigger an HDFS rebalance; if you want one after scaling up, you have to run the balancer yourself (see the sketch below).
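
A sketch of running the balancer manually after a scale-up; the threshold value is just an example:

```bash
# Rebalance HDFS after scale-up; -threshold is the allowed deviation (in percent)
# of each DataNode's utilization from the cluster-wide average.
hdfs balancer -threshold 10
```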

c) How do you know if you have lost a block? For example, if you scale down 8 out of your 10 nodes, how would HDFS handle this case, assuming you have enough storage on the remaining 2 nodes?

- HDFS: If you do not have enough live DataNodes to reach the replication factor, the decommission process will hang until more DataNodes become available (e.g., if you have 10 DataNodes in your cluster and dfs.replication is set to 3, you can scale your cluster down to no fewer than 3 nodes).

- Cloudbreak: if you have 10 DataNodes with a replication factor of 3, Cloudbreak will not even let you remove more than 7 instances; you get back a "Cluster downscale failed.: There is not enough node to downscale. Check the replication factor and the ApplicationMaster occupation." error message. (A quick way to check for missing or under-replicated blocks is sketched below.)
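
A quick way to verify that nothing was lost after a scale-down, using the standard fsck and dfsadmin summaries:

```bash
# Summarize block health after a scale-down: look for missing, corrupt or
# under-replicated blocks in the fsck and dfsadmin output.
hdfs fsck / | grep -Ei "under-replicated|missing|corrupt"
hdfs dfsadmin -report | grep -i "under replicated"
```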

Comments

So I had some internal discussion, and the real answer is that dynamic scale-down is hard to achieve. You can scale down using Cloudbreak, but Cloudbreak decommissions the service before it kills the Docker image. So you can technically do it, but as you do, HDFS will try to relocate the replicas, which is going to be time consuming.

The alternative is to use something like WASB, where the data is not in local HDFS storage but in WASB. Storage and compute are separate, so you can turn down instances easily (see the sketch below).
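
For completeness, a sketch of what accessing data on WASB instead of local HDFS looks like; the storage account and container names are made up, and the account key has to be configured in core-site.xml (fs.azure.account.key.<storage_account>.blob.core.windows.net) beforehand:

```bash
# Read and write data directly in Azure Blob storage via the wasb:// scheme
# (mycontainer and mystorageaccount are placeholder names).
hadoop fs -ls wasb://mycontainer@mystorageaccount.blob.core.windows.net/
hadoop fs -put localfile.txt wasb://mycontainer@mystorageaccount.blob.core.windows.net/data/
```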