04-12-2021
11:52 PM
Hi @JGUI , there is no need to delete data from the DataNode that is going to be decommissioned. Once the DataNode is decommissioned, all of its blocks are re-replicated to other DataNodes. Are you encountering any error while decommissioning? Typically, HDFS self-heals: the NameNode re-replicates the blocks that became under-replicated due to the decommissioned DataNode, using the remaining replicas (e.g. the other two copies, at the default replication factor of 3) as sources.
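For reference, the standard decommission flow described above can be sketched with stock HDFS CLI commands. This is a hedged sketch: the exclude-file path and the hostname `dn3.example.com` are assumptions for illustration, and your cluster's `dfs.hosts.exclude` setting may point elsewhere (on Ambari-managed clusters, use the Ambari UI to decommission instead of editing files by hand).

```shell
# Sketch of a manual HDFS decommission check (paths/hostnames are
# assumptions; adjust to match your cluster's dfs.hosts.exclude config).

# 1. Add the DataNode's hostname to the exclude file referenced by
#    dfs.hosts.exclude in hdfs-site.xml (path below is an example).
echo "dn3.example.com" >> /etc/hadoop/conf/dfs.exclude

# 2. Tell the NameNode to re-read the include/exclude files and begin
#    decommissioning.
hdfs dfsadmin -refreshNodes

# 3. Watch progress: the node is reported as "Decommission In Progress"
#    until all of its blocks have been re-replicated elsewhere.
hdfs dfsadmin -report | grep -A 3 "dn3.example.com"

# 4. Before stopping the DataNode process, verify there are no
#    under-replicated blocks left.
hdfs fsck / | grep -i "under-replicated"
```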
06-19-2020
07:28 AM
<edit> Sorry, I thought this was a new post. @guido please do not respond to old posts with new solutions. The Ambari 2.6.0 bug is not applicable here. @apappu The version suffix on the package name is how they handle having multiple versions available in the repos, as well as facilitating the upgrade process from one version to another. Using variable scope in the Python code allows the Ambari code to be dynamic across all the different versions, environments, etc. If your repos are set up correctly, this should not be an issue. I have seen some failures in the public repos lately (slow responses, or requests blocked by certain cloud providers) causing the "no package found" errors. If you are running your own private repos and hit an issue like this, you can create the packages you need at the version you want by running the rpmrebuild command on an existing rpm.
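The rpmrebuild approach mentioned above can be sketched as follows. This is a hedged example: the package filename and target version are illustrative, not taken from the thread, and you should verify the resulting package against your repo's naming scheme before publishing it.

```shell
# Sketch: clone an existing versioned rpm under a new version string
# using rpmrebuild (package name and version below are illustrative).

# -p operates directly on an .rpm file instead of an installed package;
# --change-spec-preamble runs a filter over the spec preamble, here
# rewriting the Version: tag to the version your stack expects.
rpmrebuild -p \
  --change-spec-preamble='sed -e "s/^Version:.*/Version: 2.6.5.0/"' \
  hadoop_2_6_4_0-91-hdfs.rpm

# By default the rebuilt package is written under ~/rpmbuild/RPMS/<arch>/;
# from there you can sign it and add it to your private repo.
```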