HDP Cloudbreak Periscope downscaling issue

Hello Everyone,

Has anyone encountered the following issue while scaling in (downscaling) a cluster using a Periscope scaling policy? The same issue has also been observed when "Removing nodes" from the Cloudbreak UI:

12/5/2017 3:55:18 PM hdpcbdcluster - update failed: New node(s) could not be removed from the cluster. Reason Trying to move '8192' bytes worth of data to nodes with '0' bytes of capacity is not allowed

All I know is that 8192 bytes is the default Linux block size. The only way I can scale the cluster down is by manually terminating the machine.

Regards,

Sakhuja

Expert Contributor

Hi @Abhishek Sakhuja,

Could you provide some more information? Which version are you using? From how many nodes are you trying to scale down, and to how many? It is possible that you are trying to scale down too far (fewer nodes would remain than are necessary for the cluster to work).

@mmolnar Thank you for your response. These errors occur roughly 80% of the time when I am downscaling; the other 20% of attempts succeed. This is what I have done:

Cluster configuration:

min: 2; max: 3; cooldown period: 30 mins

Master nodes: 1; worker nodes: 2

Scaling down 1 worker node from the cluster gives this error.

HDP Version - 2.5

Cloudbreak Version - 1.16.4

Regards,

Sakhuja

@Abhishek Sakhuja

The error message is an indication that your HDFS is running out of space.

The amount of free space is fetched from Ambari and calculated as follows:

def Map<String, Map<Long, Long>> getDFSSpace() {
  def result = [:]
  def response = utils.slurp("clusters/${getClusterName()}/services/HDFS/components/NAMENODE", 'metrics/dfs')
  log.info("Returned metrics/dfs: {}", response)
  def liveNodes = slurper.parseText(response?.metrics?.dfs?.namenode?.LiveNodes as String)
  if (liveNodes) {
    liveNodes.each {
      if (it.value.adminState == 'In Service') {
        result << [(it.key.split(':')[0]): [(it.value.remaining as Long): it.value.usedSpace as Long]]
      }
    }
  }
  result
}

Please check the Ambari UI; it may be that Ambari is reporting the free space incorrectly.
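
If it helps, here is a minimal sketch of pulling the same NameNode metrics straight from the Ambari REST API, so you can see the remaining/used bytes per live DataNode yourself. The host, port, credentials and cluster name below are placeholders, so adjust them for your environment:

import groovy.json.JsonSlurper

// Placeholders: point these at your own Ambari instance and cluster
def ambari  = 'http://ambari-host:8080'
def cluster = 'hdpcbdcluster'
def auth    = 'admin:admin'.bytes.encodeBase64().toString()

// Same endpoint the snippet above slurps: the NameNode's metrics/dfs
def url  = new URL("${ambari}/api/v1/clusters/${cluster}/services/HDFS/components/NAMENODE?fields=metrics/dfs")
def conn = url.openConnection()
conn.setRequestProperty('Authorization', "Basic ${auth}")

def json = new JsonSlurper().parse(conn.inputStream)
// LiveNodes is itself a JSON-encoded string, so it has to be parsed a second time
def liveNodes = new JsonSlurper().parseText(json.metrics.dfs.namenode.LiveNodes as String)

liveNodes.each { host, info ->
    println "${host}: remaining=${info.remaining} used=${info.usedSpace} adminState=${info.adminState}"
}

If any 'In Service' node shows remaining as 0 (or a negative value), that is the capacity Cloudbreak sees when it refuses the downscale.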

Hope this helps!

@pdarvasi Thank you for helping to clarify the question. Let me point the default HDFS location to cloud storage and check whether the same issue persists.

Regards,

Sakhuja

@Abhishek Sakhuja Do you have any updates on this one? Have you managed to get it working? If you consider your original question answered, would you please consider accepting the answer?

@pdarvasi Sorry, but it is still the same, because I have my default HDFS location set to Azure Blob storage instead of local. Trying to overcome it!

@pdarvasi I am still stuck on the same issue. Is it possible to edit "AmbariDecommissioner.java" in the jar? If so, where can I find it in Cloudbreak?

Thanks for your help in advance!

@pdarvasi I finally found the solution to this issue. Here are my findings:

When we started HDP using Cloudbreak, the default HDP configuration set the non-HDFS reserved storage ("dfs.datanode.du.reserved", approximately 3.5% of total disk) based on the datanode with the lowest storage among the compute config groups, which had three drives, one of them in the TB range. Our default datanode data directory ("dfs.datanode.data.dir") was pointing to the drive with the lowest capacity (around 3% of the overall datanode storage). Because 3% is less than the 3.5% reservation, the reported HDFS capacity became 0, and the supporting directories and files (a few KB) already on that drive pushed the datanode's remaining capacity negative. To fix the downscaling issue, we either need to lower the non-HDFS reserved capacity (below the roughly 3% the data drive provides) or point the datanode data directory to a drive with higher capacity (greater than the roughly 3.5% reservation).
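
To make the arithmetic concrete, here is a rough sketch with made-up numbers (not our exact figures) of how the reported DataNode capacity ends up at zero and then slightly negative:

// Illustration only: hypothetical sizes, not our real cluster values
long totalDnStorage = 4000L * 1024 * 1024 * 1024           // ~4 TB across the three drives
long reserved       = (long) (totalDnStorage * 0.035)      // dfs.datanode.du.reserved, ~3.5% of total
long dataDirDrive   = (long) (totalDnStorage * 0.03)       // drive holding dfs.datanode.data.dir, ~3% of total
long alreadyUsed    = 64 * 1024                            // a few KB of existing supporting files

long configuredCapacity = dataDirDrive - reserved          // negative, so HDFS reports ~0% capacity
long remaining          = configuredCapacity - alreadyUsed // a few KB below zero

println "configured capacity: ${configuredCapacity} bytes"
println "remaining:           ${remaining} bytes"

With the DataNode reporting zero (or negative) remaining bytes, Cloudbreak refuses to move even the 8192 bytes used on the node being removed, which matches the error at the top of the thread.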

I tried this and it worked. No more changing the WASB URI; I am keeping it as the default storage. However, I am thankful to you for your suggestions.