Has anyone encountered the following issue while scaling in (downscaling) a cluster using a Periscope scaling policy? The same issue has also been observed when "Removing nodes" from the Cloudbreak UI:
12/5/2017 3:55:18 PM hdpcbdcluster - update failed: New node(s) could not be removed from the cluster. Reason Trying to move '8192' bytes worth of data to nodes with '0' bytes of capacity is not allowed
I only know that 8192 bytes is the default Linux block size. The only way I can currently scale the cluster down is by terminating the machine manually.
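For anyone checking the same symptom, the standard HDFS client command below shows what the NameNode thinks each DataNode's capacity is (run from any cluster node with a configured HDFS client):

    # Per-DataNode view; on an affected node, "Configured Capacity" and
    # "DFS Remaining" may show up as zero or even negative.
    hdfs dfsadmin -report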
Could you provide us some more information? Which version do you use? From how many nodes are you trying to scale down, and to how many? It's possible that you are trying to scale down too much (fewer nodes would remain than are necessary for the cluster to work).
@pdarvasi Finally found the solution to this issue. Here are my findings:
When we started HDP using Cloudbreak, the HDP default configuration calculated the non-HDFS reserved storage "dfs.datanode.du.reserved" (approx. 3.5% of total disk) from the DataNode with the lowest storage among the compute config groups, which had three drives, one of them in TBs. Our default DataNode data directory "dfs.datanode.data.dir" pointed to the drive with the lowest capacity (around 3% of the overall DataNode storage). Since 3% < 3.5%, HDFS capacity came out as 0, and because that drive already held some supporting directories and files (a few KB), the DataNode ended up with a negative reported capacity. To fix the downscaling issue we need to either lower the non-HDFS reserved capacity (below the 3% drive's size) or point the DataNode at a drive with higher capacity (above the 3.5% reserve).
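To make the arithmetic concrete (illustrative numbers, not our actual sizes): with roughly 1 TB of total DataNode storage, a 3.5% reserve is about 35 GB per volume, while a data volume holding only 3% of the storage is about 30 GB, so usable HDFS capacity works out to 30 GB - 35 GB < 0, which gets reported as zero (or negative once the existing supporting files are counted). Below is a sketch of the corresponding hdfs-site.xml overrides; the values and path are hypothetical, and on a Cloudbreak/Ambari-managed cluster you would change these through Ambari or the blueprint rather than by editing the file by hand:

    <!-- Option 1: shrink the per-volume non-HDFS reserve so it fits on the
         smallest data volume (10 GB here, purely illustrative). -->
    <property>
      <name>dfs.datanode.du.reserved</name>
      <value>10737418240</value>
    </property>

    <!-- Option 2: point the DataNode at a larger volume instead
         (path is hypothetical). -->
    <property>
      <name>dfs.datanode.data.dir</name>
      <value>/hadoopfs/fs1/hdfs/datanode</value>
    </property>

You can confirm the value in effect with hdfs getconf -confKey dfs.datanode.du.reserved; after restarting the DataNodes, hdfs dfsadmin -report should show a positive configured capacity again.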
I tried this and it worked. No more changing the WASB URI, so I am keeping it as the default storage. Thanks for your suggestions.