Created 01-04-2016 05:24 PM
We have two use cases: one is the normal slight imbalance that can creep up gradually, and the other is when we add new nodes. Ten new nodes can mean 100 TB+ to move around, which can take a very long time at the normal dfs.network.bandwidth.persecond setting. What's a good strategy? Is it reasonable to use cron to reset the value during off hours? What's the best practice? Also, does rebalancing defer to normal processing dynamically?
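For example, I was picturing crontab entries along these lines. I understand `hdfs dfsadmin -setBalancerBandwidth` pushes a new cap to the live DataNodes without a restart; the bandwidth values and times below are only placeholders:

```
# Raise the per-DataNode balancer bandwidth cap to 100 MB/s at 10 PM
0 22 * * * hdfs dfsadmin -setBalancerBandwidth 104857600
# Drop it back to a conservative 10 MB/s daytime cap at 6 AM
0 6 * * * hdfs dfsadmin -setBalancerBandwidth 10485760
```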
Created 01-05-2016 12:26 AM
@Peter Coates A good strategy, if you are able to, is to add a few nodes at a time, for example two or three, and wait for new file data to be allocated to those nodes before adding more.
If you add all ten nodes at once, then yes, the cluster would be moderately to severely imbalanced, depending on your node count and utilization beforehand.
You can also selectively put one or two existing DataNodes into maintenance mode, shut them down, and wait for their blocks to re-replicate before bringing them back up. The rebalancer does not defer to normal processing dynamically. Raising the dfs.network.bandwidth.persecond setting during off hours sounds reasonable to me.
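One way to tell when it is safe to bring a node back is to watch the under-replicated block count drain back to zero, for example:

```
# fsck prints a namespace-wide block health summary; the relevant line
# here is "Under-replicated blocks", which should return to zero once
# re-replication has caught up
hdfs fsck / | grep -i 'under-replicated'
```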
Created 01-05-2016 11:37 PM
I disagree with some recommendations in the earlier answer.
We do not recommend bringing down DataNodes to trigger balancing via re-replication. This will create needless load on the cluster and hurt data availability. Also it is impractical to modify `dfs.datanode.balance.bandwidthPerSec` (I assume that's what you meant) based on cluster usage.
We do recommend running the balancer periodically during times when the cluster load is expected to be lower than usual. Recent fixes to the balancer have improved its performance. See HDFS-8818, HDFS-8824 and HDFS-8826. These fixes were back-ported to HDP maintenance releases and are available in HDP 2.2.8 and HDP 2.3.2.
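As a sketch, a periodic off-hours run could be as simple as the following; the schedule, threshold, and log path are illustrative, not prescriptive:

```
# Start the balancer at 1 AM every Saturday; -threshold is the allowed
# deviation (in percent) of each DataNode's utilization from the cluster
# average (default 10). The balancer exits on its own once all nodes are
# within the threshold, so a run on an already balanced cluster is cheap.
0 1 * * 6 hdfs balancer -threshold 5 >> /var/log/hdfs-balancer.log 2>&1
```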
Created 06-23-2016 05:09 PM
We had a similar issue in the past when doubling a cluster's size by adding new nodes. We were stuck on an older Hadoop version without the balancer optimizations. We solved it by temporarily doubling the file replication factor, which made existing blocks under-replicated relative to the new target and triggered aggressive re-replication onto the empty nodes. We did it in multiple steps during off-peak times. After replication completed we restored the previous replication factors, and the cluster was magically balanced 🙂
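Concretely, the mechanism is the standard `setrep` command; the path and factors below are placeholders for whatever your directories and original replication factor actually are:

```
# Temporarily double the replication factor (here from 3 to 6); -w blocks
# until the new replicas exist, and the new copies tend to land on the
# emptier (new) DataNodes since placement avoids nodes that are too full
hdfs dfs -setrep -w 6 /data/warehouse
# Restore the original factor; the NameNode deletes the excess replicas,
# tending to free space on the fullest nodes
hdfs dfs -setrep 3 /data/warehouse
```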
Created 05-17-2017 07:39 AM
Is there any way to get some sort of guesstimate of how long a rebalance will take?