Support Questions


How to check that all components on a master are stopped while the Ambari server is down


With the following API we can check component status on the relevant master machine.

My goal is to verify that all components are stopped before rebooting the machine, but if the Ambari server is down then we can't use the API.

My question is: what alternative is there to check that all components are really stopped on a master machine (in case the Ambari server is down)?

 curl -u admin:admin -H "X-Requested-By: ambari" -X GET http://master02.sys65.com:8080/api/v1/clusters/HDP/hosts/master02.sys65.com/host_components?fields=H...
Michael-Bronson
1 ACCEPTED SOLUTION


@Michael Bronson - greetings to a fellow Simpsonified!

If the Ambari server is down, then there are obviously problems with the system, and it is a little tricky to say what to do -- a complete answer would look more like a decision tree than a single response. Nevertheless, I'll try to provide some help, with the understanding that a complete answer is beyond the scope of a simple Q&A.

First off, why not restart the Ambari service? That's by far the simplest answer. Give it a few minutes to check its connectivity with the other services, and you should be able to proceed via Ambari. Really, this is the only good solution.
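As a rough sketch of that path (run on the Ambari server host; the exact status strings printed by `ambari-server status` are an assumption here):

```shell
# Check whether the Ambari server process is up, and restart it if not.
# 'ambari-server status' prints something like "Ambari Server running" or
# "Ambari Server not running" (exact wording is an assumption).
if ambari-server status 2>/dev/null | grep -q "not running"; then
    ambari-server restart
    # Give it a few minutes to re-establish contact with the agents;
    # after that the REST API above becomes usable again.
fi
```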

If you really need to do it the hard way, there are two basic choices: (a) If you know a little about each of the services, you can use service-specific CLIs on each of the masters to check status and/or stop each service. (b) Alternatively, you can exploit the fact that these services essentially all run in Java and are typically installed on "hdp" paths in the filesystem, and use Linux (or OS-equivalent) commands to find and kill them.

Option (a) requires a little research, and depends on which services are running in the cluster. For instance, you can log into the HDFS Master and use the commands summarized here: https://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-common/ClusterSetup.html#Hadoop_Shu... to shut down the core Hadoop services. Each of the other major components has similar CLIs, which some diligent websearching on the Apache sites will find. They allow querying the current status of the services, and provide reasonably clean shutdowns.
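As a minimal sketch of option (a) on the HDFS master (the JVM process names matched below are the standard Hadoop daemon names, but treat the exact list as an assumption for your cluster):

```shell
# 'jps' lists local JVM processes; NameNode etc. should no longer
# appear once the corresponding service has been cleanly stopped.
jps

# Count HDFS master daemons still running; 0 means this host is quiet.
running=$(jps | grep -cE "NameNode|SecondaryNameNode|JournalNode")
echo "HDFS master daemons still running: $running"
```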

The final option assumes you are a capable Linux administrator and understand how HDP was installed. If not, you shouldn't try this.

Option (b) is rude, crude, and depends overmuch on the famed crash-resilience of Hadoop components. Do NOT take this answer as Hortonworks advice -- on the contrary, it is the equivalent of pulling the plug on your servers, something you should only do on a production system if there are no other alternatives. But the fact is, if you have admin privileges, running `ps auxwww|grep java|grep -i hdp` (assuming all services have been installed on paths that include the word 'hdp' or 'HDP') on each host in the cluster should reveal all HDP-related processes still running (and maybe some others; do check the results carefully). If you see fit to kill them ... that's your responsibility. It is very important, if at all possible, to quiesce the cluster first, at least by stopping data inputs and waiting a few minutes.

Note that the Ambari Agent is typically installed as a service with auto-restart; it is resilient and stateless, so it is not necessary to stop it before rebooting the server, but you could run `ambari-agent stop` (on each server) to make sure it stays out of the way while you work on the other services. Rebooting the server should restart it too.
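Putting option (b) together as a pre-reboot check (this only verifies the host is quiet; killing anything it finds remains your call):

```shell
# Find HDP-related Java processes still alive on this host
# (assumes install paths contain 'hdp'; 'grep -v grep' drops the grep itself).
ps auxwww | grep java | grep -i hdp | grep -v grep

# A host that is safe to reboot should report 0 here:
count=$(ps auxwww | grep java | grep -i hdp | grep -v grep | wc -l)
echo "HDP Java processes remaining: $count"
```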

Hope this helps.

