Created 11-10-2019 04:28 AM
Hi all,
I have a 9-host cluster on HDP 2.6.5. Recently, a host went down and had to be rebuilt. It was hosting a ZooKeeper server, a DataNode, and a YARN server. Can I safely delete the host from the Ambari web UI, since I'm unable to stop its services?
Thanks.
Created 11-11-2019 12:48 PM
Before you embark on that, there are a couple of questions: have you moved or recreated the ZooKeeper server, the YARN component, and the DataNode on another node?
You should have at least 3 ZooKeeper servers, and what about your YARN server? Usually, when a host crashes and stops sending heartbeats, it is excluded from the healthy nodes after a period of time.
1. Decommission the DataNode. On the NameNode host, add the dead host's name to the $HADOOP_CONF_DIR/dfs.exclude file, then refresh the NameNode's list of nodes:
su $HDFS_USER
hdfs dfsadmin -refreshNodes
$HDFS_USER is the user that owns the HDFS services, which is usually hdfs. If no dfs.include file is used, all DataNodes are considered included in the cluster unless they are listed in $HADOOP_CONF_DIR/dfs.exclude. A concrete example follows below.
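As a minimal sketch of that step, assuming the dead host's FQDN is dead-node.example.com and the Hadoop config directory is /etc/hadoop/conf (both placeholders, adjust to your cluster), the commands run on the NameNode host would look roughly like:
su - hdfs
# Add the dead host to the exclude file read by the NameNode
echo "dead-node.example.com" >> /etc/hadoop/conf/dfs.exclude
# Tell the NameNode to re-read its include/exclude lists
hdfs dfsadmin -refreshNodes
# If the host also ran a NodeManager, refresh the ResourceManager's node list as well
# (assumes the host is also listed in the file pointed to by yarn.resourcemanager.nodes.exclude-path)
yarn rmadmin -refreshNodes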
You can also use the Ambari REST API to achieve this.
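For illustration only, a rough sketch of those REST calls (the cluster name MyCluster, the host dead-node.example.com, the Ambari server URL, and the admin credentials are all placeholders; the host's components must be stopped or marked lost before Ambari will let you delete them):
curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE "http://ambari-server:8080/api/v1/clusters/MyCluster/hosts/dead-node.example.com/host_components/ZOOKEEPER_SERVER"
curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE "http://ambari-server:8080/api/v1/clusters/MyCluster/hosts/dead-node.example.com/host_components/DATANODE"
curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE "http://ambari-server:8080/api/v1/clusters/MyCluster/hosts/dead-node.example.com/host_components/NODEMANAGER"
curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE "http://ambari-server:8080/api/v1/clusters/MyCluster/hosts/dead-node.example.com"
Repeat the first call for each component still registered on the host, then the last call removes the host itself so it no longer appears in the Ambari Hosts page.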
Hope that helps
Created 11-16-2019 09:42 PM
Hi Shelton,
The steps you provided worked perfectly.
Thanks!