Member since: 10-09-2015
Posts: 62
Kudos Received: 48
Solutions: 14
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3234 | 09-07-2017 07:15 PM
 | 3198 | 06-26-2017 08:24 PM
 | 2379 | 02-03-2017 08:21 PM
 | 2847 | 01-28-2017 12:32 AM
 | 2201 | 01-12-2017 07:39 PM
01-11-2017
08:44 PM
1 Kudo
@Timothy Spann You can definitely use Ambari's granular APIs to delete Hive Server Interactive and add it back on another host. However, Hive Interactive also requires other clients (Slider, Tez, etc.) to work, so you would need to add those on the new host as well. Instead, I would suggest the easiest way is from the UI: disable Interactive Query (interactive-query.png) and save the settings (this automatically stops and deletes Hive Server Interactive from the existing host), wait for all background operations to complete, and then enable it again. While re-enabling Interactive Query, you will be asked to choose the HS2 Interactive host; when the settings are saved, the selected host will have HS2 Interactive installed along with the other dependent clients. The YARN queue should be refreshed automatically and HS2 will be running on the new host. Also, at any time you can go to the Hive service summary page and hover over HS2 Interactive to see which host it is currently installed on (hs2-interactive-hostname.png); this saves you from clicking through and navigating to the HS2 Interactive host just to identify its hostname. A rough sketch of the granular API route is shown below.
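If you do want to go the granular API route, here is a hedged sketch of what the calls could look like, following the curl conventions used elsewhere in this thread. The cluster name (c1) and host names (c6401/c6402.ambari.apache.org) are placeholders, and remember the dependent clients (Slider, Tez, etc.) would still need to be added on the new host separately.

# Stop, then delete, HIVE_SERVER_INTERACTIVE on the old host (placeholder names)
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT -d '{"HostRoles": {"state": "INSTALLED"}}' 'http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/hosts/c6401.ambari.apache.org/host_components/HIVE_SERVER_INTERACTIVE'
curl -u admin:admin -H 'X-Requested-By: ambari' -X DELETE 'http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/hosts/c6401.ambari.apache.org/host_components/HIVE_SERVER_INTERACTIVE'

# Add the component to the new host, then install and start it
curl -u admin:admin -H 'X-Requested-By: ambari' -X POST 'http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/hosts/c6402.ambari.apache.org/host_components/HIVE_SERVER_INTERACTIVE'
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT -d '{"HostRoles": {"state": "INSTALLED"}}' 'http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/hosts/c6402.ambari.apache.org/host_components/HIVE_SERVER_INTERACTIVE'
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT -d '{"HostRoles": {"state": "STARTED"}}' 'http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/hosts/c6402.ambari.apache.org/host_components/HIVE_SERVER_INTERACTIVE'

This mirrors the standard Ambari host_components lifecycle (POST to add, PUT state to install/start), but the UI path above remains the simpler option since it handles the dependent clients for you.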
01-11-2017
07:40 PM
@Sanket Korgaonkar
I am glad that the suggestion helped you. Please accept the answer so it can serve as a verified reference for other people who encounter a similar problem in the future.
01-11-2017
07:19 PM
1 Kudo
@Sanket Korgaonkar It seems that your API semantics are correct, but there is a typo in the service name and component name in the API call: it should be "HBASE_CLIENT" instead of "Hbase_Client" and "HBASE" instead of "Hbase". Can you retry after making that change and let us know if that worked for you? Below is the API call with the suggested changes; please replace the cluster name and host names as needed.

curl -u admin:admin -H 'X-Requested-By: ambari' -X POST -d '{
  "RequestInfo": {
    "command": "RESTART",
    "context": "Restart HBase client on c6401.ambari.apache.org",
    "operation_level": {
      "level": "HOST",
      "cluster_name": "c1"
    }
  },
  "Requests/resource_filters": [
    {
      "service_name": "HBASE",
      "component_name": "HBASE_CLIENT",
      "hosts": "c6401.ambari.apache.org"
    }
  ]
}' 'http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/requests'
01-04-2017
08:59 PM
4 Kudos
The ability to download client configs after cluster deployment was first added to Ambari in version 1.7.0. This feature helps Hadoop admins in the following known use cases:

- Setting up a client host that is not managed by Ambari, whether for the admin or for another developer.
- Ambari as of now (version 2.5.0) does not support multiple clusters. If a user's environment has multiple clusters sharing a common set of client hosts that need to be configured to interact with all of them, those hosts cannot be added to any one particular Ambari cluster and must be managed outside of Ambari. In this scenario, being able to download all client configs is helpful for setting up these client hosts.
- Writing automation scripts for a Hadoop application running on an Ambari-managed cluster without using any existing node of the cluster to run the automation code. Setting up the application on dedicated automation hosts also requires setting up the Hadoop clients there.

As stated earlier, this feature was first added in Ambari 1.7.0 and has been further enhanced with newer APIs in Ambari 2.5.0. Let's go through each supported API related to downloading client configurations.

1. Download client configuration for a service component (supported since Ambari 1.7.0). This API downloads the base configs for the client component, meaning the configuration of the Default config group.

Example:

curl --user admin:admin -H "X-Requested-By: ambari" -X GET http://localhost:8080/api/v1/clusters/c1/services/HDFS/components/HDFS_CLIENT?format=client_config_tar -o output.tar.gz

Downloaded tarball:

tar tvf output.tar.gz
drwx------ root/root 0 2017-01-03 21:47 ./
-rw-r--r-- root/root 6639 2017-01-03 21:47 hdfs-site.xml
-rw-r--r-- root/root 2526 2017-01-03 21:47 core-site.xml
-rw-r--r-- root/root 5740 2017-01-03 21:47 hadoop-env.sh
-rw-r--r-- root/root 6764 2017-01-03 21:47 log4j.properties

UI Interface:

2. Download client configuration for a host component (supported since Ambari 1.7.0). This API downloads the configs of the client component on a specific host. Note that Ambari supports host-level overrides via the config groups feature, so the configuration of a client component on a specific host can differ from the configuration of the same client component on other hosts. This API downloads the configs of the config group that the host component belongs to, meaning the actual in-effect configuration of the client on that host will be downloaded.

Example:

curl --user admin:admin -H "X-Requested-By: ambari" -X GET http://localhost:8080/api/v1/clusters/c1/hosts/c6401.ambari.apache.org/host_components/HDFS_CLIENT?format=client_config_tar -o output.tar.gz

Downloaded tarball:

tar tvf output.tar.gz
drwx------ root/root 0 2017-01-03 22:16 ./
-rw-r--r-- root/root 6639 2017-01-03 22:16 hdfs-site.xml
-rw-r--r-- root/root 2526 2017-01-03 22:16 core-site.xml
-rw-r--r-- root/root 5740 2017-01-03 22:16 hadoop-env.sh
-rw-r--r-- root/root 6764 2017-01-03 22:16 log4j.properties

UI Interface:

3. Download client configuration for a service (supported from Ambari 2.5.0). This API downloads base configs (Default config group) for all client components of a service.

Example:

curl --user admin:admin -H "X-Requested-By: ambari" -X GET http://localhost:8080/api/v1/clusters/c1/services/HIVE/components?format=client_config_tar -o output.tar.gz

Downloaded tarball:

tar tvf output.tar.gz
drwx------ root/root 0 2017-01-03 23:53 HIVE_CLIENT/./
-rw-r--r-- root/root 19226 2017-01-03 23:53 HIVE_CLIENT/hive-site.xml
-rw-r--r-- root/root 2148 2017-01-03 23:53 HIVE_CLIENT/hive-env.sh
-rw-r--r-- root/root 3050 2017-01-03 23:53 HIVE_CLIENT/hive-log4j.properties
-rw-r--r-- root/root 2652 2017-01-03 23:53 HIVE_CLIENT/hive-exec-log4j.properties
drwx------ root/root 0 2017-01-03 23:53 HCAT/./
-rw-r--r-- root/root 1275 2017-01-03 23:53 HCAT/hcat-env.sh

UI Interface:

4. Download client configuration of a host (supported from Ambari 2.5.0). This API downloads the effective configurations of all client components installed on a specific host.

Example:

curl --user admin:admin -H "X-Requested-By: ambari" -X GET http://localhost:8080/api/v1/clusters/c1/hosts/c6401.ambari.apache.org/host_components?format=client_config_tar -o output.tar.gz

Downloaded tarball:

tar tvf output.tar.gz
drwx------ root/root 0 2017-01-03 23:42 YARN_CLIENT/./
-rw-r--r-- root/root 2135 2017-01-03 23:42 YARN_CLIENT/capacity-scheduler.xml
-rw-r--r-- root/root 5309 2017-01-03 23:42 YARN_CLIENT/yarn-env.sh
-rw-r--r-- root/root 2996 2017-01-03 23:42 YARN_CLIENT/core-site.xml
-rw-r--r-- root/root 16318 2017-01-03 23:42 YARN_CLIENT/yarn-site.xml
-rw-r--r-- root/root 10396 2017-01-03 23:42 YARN_CLIENT/log4j.properties
drwx------ root/root 0 2017-01-03 23:42 HDFS_CLIENT/./
-rw-r--r-- root/root 6639 2017-01-03 23:42 HDFS_CLIENT/hdfs-site.xml
-rw-r--r-- root/root 2996 2017-01-03 23:42 HDFS_CLIENT/core-site.xml
-rw-r--r-- root/root 5740 2017-01-03 23:42 HDFS_CLIENT/hadoop-env.sh
-rw-r--r-- root/root 10396 2017-01-03 23:42 HDFS_CLIENT/log4j.properties

UI Interface:

5. Download client configuration of a cluster (supported from Ambari 2.5.0). This API downloads the base configurations (Default config group) of all client service components present in a cluster.
Example:

curl --user admin:admin -H "X-Requested-By: ambari" -X GET http://localhost:8080/api/v1/clusters/c1/components?format=client_config_tar -o output.tar.gz

Downloaded tarball:

tar tvf output.tar.gz
drwx------ root/root 0 2017-01-03 23:54 ZOOKEEPER_CLIENT/./
-rw-r--r-- root/root 2477 2017-01-03 23:54 ZOOKEEPER_CLIENT/log4j.properties
-rw-r--r-- root/root 310 2017-01-03 23:54 ZOOKEEPER_CLIENT/zookeeper-env.sh
drwx------ root/root 0 2017-01-03 23:54 YARN_CLIENT/./
-rw-r--r-- root/root 2135 2017-01-03 23:54 YARN_CLIENT/capacity-scheduler.xml
-rw-r--r-- root/root 5309 2017-01-03 23:54 YARN_CLIENT/yarn-env.sh
-rw-r--r-- root/root 2996 2017-01-03 23:54 YARN_CLIENT/core-site.xml
-rw-r--r-- root/root 16318 2017-01-03 23:54 YARN_CLIENT/yarn-site.xml
-rw-r--r-- root/root 10396 2017-01-03 23:54 YARN_CLIENT/log4j.properties
drwx------ root/root 0 2017-01-03 23:54 HDFS_CLIENT/./
-rw-r--r-- root/root 6639 2017-01-03 23:54 HDFS_CLIENT/hdfs-site.xml
-rw-r--r-- root/root 2996 2017-01-03 23:54 HDFS_CLIENT/core-site.xml
-rw-r--r-- root/root 5740 2017-01-03 23:54 HDFS_CLIENT/hadoop-env.sh
-rw-r--r-- root/root 10396 2017-01-03 23:54 HDFS_CLIENT/log4j.properties
drwx------ root/root 0 2017-01-03 23:54 MAPREDUCE2_CLIENT/./
-rw-r--r-- root/root 2996 2017-01-03 23:54 MAPREDUCE2_CLIENT/core-site.xml
-rw-r--r-- root/root 868 2017-01-03 23:54 MAPREDUCE2_CLIENT/mapred-env.sh
-rw-r--r-- root/root 6754 2017-01-03 23:54 MAPREDUCE2_CLIENT/mapred-site.xml

UI Interface:
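To tie API 5 back to the unmanaged-host use case, here is a hedged shell sketch of fetching and unpacking the cluster-wide client configs on a host outside Ambari. The Ambari host, cluster name, and target directory are placeholder assumptions; adjust them for your environment.

# Hedged sketch: pull all base client configs (API 5 above) onto an
# unmanaged client host. AMBARI_HOST, CLUSTER, and TARGET_DIR are placeholders.
AMBARI_HOST=localhost
CLUSTER=c1
TARGET_DIR=/tmp/client-configs
mkdir -p "$TARGET_DIR"
curl --user admin:admin -H "X-Requested-By: ambari" -X GET \
  "http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/components?format=client_config_tar" \
  -o "$TARGET_DIR/output.tar.gz"
# Unpack; this yields one directory per client component (e.g. HDFS_CLIENT/)
tar xf "$TARGET_DIR/output.tar.gz" -C "$TARGET_DIR"
ls "$TARGET_DIR"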
01-03-2017
08:05 PM
@Indrek Mäestu Please take a backup of your DB before doing any further database operations, and also confirm that the cluster is on version 2.5.0.0-1245 for the HDP-2.5.0.0 stack. This can be done by executing the following query and getting the same result as printed below:

ambari=> select cluster_version.state,repo_version.version,repo_version.display_name from cluster_version full outer join repo_version on cluster_version.repo_version_id=repo_version.repo_version_id where cluster_version.state='CURRENT' ORDER BY repo_version.version;
state | version | display_name
---------+--------------+--------------
CURRENT | 2.5.0.0-1245 | HDP-2.5.0.0
(1 row)
You can also run the same check against the host_version table to see that all hosts are in the 'CURRENT' state and on version 2.5.0.0-1245 for the HDP-2.5.0.0 stack:

ambari=> select host_version.host_id,host_version.state,repo_version.version,repo_version.display_name from host_version full outer join repo_version on host_version.repo_version_id=repo_version.repo_version_id ORDER BY repo_version.version;
host_id | state | version | display_name
---------+---------+--------------+--------------
1 | CURRENT | 2.5.0.0-1245 | HDP-2.5.0.0
(1 row)
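On a large cluster, a hedged summary variant of the host_version check can make stragglers easier to spot than reading the full per-host listing; this assumes the same host_version table as above and simply groups hosts by state:

ambari=> select state, count(*) from host_version group by state;

Any state other than CURRENT in the output points at hosts worth investigating before proceeding.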
I believe it should be safe to do a cascading delete operation if all hosts and the cluster are on the latest version. But it would be good if Nate Cole, who has more context on this part of the Ambari code, can confirm that deleting a row is the best course of action from here. cc @Nate
01-03-2017
10:59 AM
@Sami Ahmad Just to follow up on this issue: https://issues.apache.org/jira/browse/AMBARI-19287 has been fixed in the Ambari version currently under development (2.5.0), so similar confusion will not happen in the future. Thanks for bringing this issue to our notice!
01-03-2017
10:51 AM
As per the error in the attached ambari-server log, this appears to be a case of multiple registered repo versions with the same stack name and version. Can you verify whether that is indeed the case by calling the following API and checking whether there is more than one entry with the same stack name (HDP) and version (2.5)?

http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/stack_versions

Please post the output of the above API. If it also returns a 500 server error, then please check in the Ambari database whether the repo_version table has any duplicates for the composite of stack_id and version. This can be done by executing the query below:

select repo_version_id,stack_id,version,display_name from repo_version;
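If eyeballing the full listing is inconvenient, a hedged variant of that query surfaces only the duplicates by grouping on the stack_id/version composite; run it from the same ambari psql session:

ambari=> select stack_id, version, count(*) from repo_version group by stack_id, version having count(*) > 1;

Any row returned here identifies a stack/version pair registered more than once.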
01-03-2017
10:22 AM
@Indrek Mäestu In addition to providing the information asked for above by @Sagar Shimpi, can you please check the output of the stack version API call? An example is shown below: {
"href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/stack_versions?ClusterStackVersions/version=2.5&fields=ClusterStackVersions/state",
"items" : [
{
"href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/stack_versions/1",
"ClusterStackVersions" : {
"cluster_name" : "c1",
"id" : 1,
"repository_version" : 1,
"stack" : "HDP",
"state" : "CURRENT",
"version" : "2.5"
}
}
]
}
Please verify that the state is CURRENT. If it is not, then some host is not yet publishing its version as HDP-2.5, which can cause this issue of the Admin stack version page hanging. If that's the case, you can identify the hosts that are not publishing their versions as 2.5 with the following API call:

http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/stack_versions?ClusterStackVersions/version=2.5&fields=ClusterStackVersions/*

If this is not the case, then we need to look at the browser's devtools console for any JS errors (highlighted in red) and at the network tab to debug this issue further.
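For reference, a hedged curl form of the two calls above, following the conventions used elsewhere in this thread (the admin credentials and host name are placeholders):

curl -u admin:admin -H "X-Requested-By: ambari" -X GET "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/stack_versions?ClusterStackVersions/version=2.5&fields=ClusterStackVersions/state"
curl -u admin:admin -H "X-Requested-By: ambari" -X GET "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/stack_versions?ClusterStackVersions/version=2.5&fields=ClusterStackVersions/*"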
12-29-2016
06:31 AM
1 Kudo
@Qinglin Xia This can be achieved with a single Ambari REST API call, as @yusaku suggested. Following is the actual working API call and its response: {
"href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/components/HBASE_MASTER?host_components/metrics/hbase/master/IsActiveMaster=true&fields=host_components/HostRoles/host_name",
"ServiceComponentInfo" : {
"cluster_name" : "c1",
"component_name" : "HBASE_MASTER",
"service_name" : "HBASE"
},
"host_components" : [
{
"href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/hosts/c6401.ambari.apache.org/host_components/HBASE_MASTER",
"HostRoles" : {
"cluster_name" : "c1",
"component_name" : "HBASE_MASTER",
"host_name" : "c6401.ambari.apache.org"
},
"metrics" : {
"hbase" : {
"master" : {
"IsActiveMaster" : "true"
}
}
}
}
]
}
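For convenience, here is a hedged curl form of that call (the admin credentials are placeholders); the active HBase master's host name comes back under host_components/HostRoles/host_name:

curl -u admin:admin -H "X-Requested-By: ambari" -X GET "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/components/HBASE_MASTER?host_components/metrics/hbase/master/IsActiveMaster=true&fields=host_components/HostRoles/host_name"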
12-28-2016
10:38 PM
Hi @Sami Ahmad As per the attached screenshot, you are getting the error in the "Check Kerberos" action on one of the client hosts randomly selected by Ambari. It seems that the Kerberos client install went fine on all nodes; it's the Kerberos service check that failed, so I believe the issue is not tied to any particular client host. If Ambari had selected any other Kerberos client host for the service check, the action would have failed on that host too.