Created 04-28-2016 11:38 AM
Our cluster is running on HDP 2.3.4.0, and one of the hosts is showing 99% disk usage on /dev/sda1 (see the attachment), whereas there is plenty of free space on /dev/sdb1. By default Ambari selected /dev/sda1 (I don't know how) during the Hadoop cluster setup. Can I somehow change the disk from /dev/sda1 to /dev/sdb1 without disturbing/losing any data in the cluster? If not, what is the best alternative? Please suggest.
Created 04-28-2016 12:30 PM
From hadoop FAQ on apache,
Hadoop currently does not have a method by which to do this automatically; the FAQ describes the manual steps.
However, this is not something that I recommend. A cleaner approach is to decommission the node, change the mount point, and add it back to the cluster. I say cleaner because directly touching the data directory can corrupt your data with a single misstep.
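For context, "change the mount point" ultimately means that the DataNode's dfs.datanode.data.dir property ends up pointing at a directory on the /dev/sdb1 mount instead of /dev/sda1. A minimal sketch of the relevant hdfs-site.xml property is below; the paths are hypothetical, and on an Ambari-managed cluster you would change this under HDFS → Configs ("DataNode directories") rather than editing hdfs-site.xml by hand, since Ambari overwrites manual edits on restart.

```xml
<!-- hdfs-site.xml (illustrative); the paths below are examples, not your actual mounts -->
<property>
  <name>dfs.datanode.data.dir</name>
  <!-- point at a directory on the /dev/sdb1 mount instead of the old /dev/sda1 location -->
  <value>/mnt/sdb1/hadoop/hdfs/data</value>
</property>
```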
Created 04-28-2016 02:48 PM
Hi Ravi, the second approach sounds good to me. Is there a way to decommission a node using Ambari? More detail on that approach would really help me.
Created 04-28-2016 03:28 PM
Yes, you can decommission a node using Ambari. https://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.1/bk_Ambari_Users_Guide/content/_how_to_decom...