Member since: 10-09-2015
Posts: 62
Kudos Received: 48
Solutions: 14
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1850 | 09-07-2017 07:15 PM |
|  | 1786 | 06-26-2017 08:24 PM |
|  | 1332 | 02-03-2017 08:21 PM |
|  | 1319 | 01-28-2017 12:32 AM |
|  | 1015 | 01-12-2017 07:39 PM |
02-06-2018
08:17 PM
@Michael Bronson The Ambari API does not support returning a boolean true/false answer for this question directly. The closest information you can get from the Ambari API is shown below; you will then need a small piece of client-side logic to compute the boolean.

1. Ambari API that returns the list of all components in the cluster that have instances down:

http://localhost:8080/api/v1/clusters/c4/components?(ServiceComponentInfo/init_count!=0|ServiceComponentInfo/install_failed_count!=0|ServiceComponentInfo/installed_count!=0)&ServiceComponentInfo/category!=CLIENT&minimal_response=true

The response to this API looks like the following and implies that some instances of DataNode and NodeManager are down. We can of course also find out which specific instances are down, but per the question that does not seem to be of interest here.

{ "items": [ { "ServiceComponentInfo": { "component_name": "DATANODE" } }, { "ServiceComponentInfo": { "component_name": "NODEMANAGER" } } ] }

2. Ambari API that returns the list of all components of a specific service (HDFS in the example below) that have instances down:

http://localhost:8080/api/v1/clusters/c4/components?(ServiceComponentInfo/init_count!=0|ServiceComponentInfo/install_failed_count!=0|ServiceComponentInfo/installed_count!=0)&ServiceComponentInfo/category!=CLIENT&ServiceComponentInfo/service_name=HDFS&minimal_response=true

Note: Replace localhost with the Ambari server host name.
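For example, a minimal client-side sketch in shell (the Ambari host, cluster name c4 and admin credentials are placeholders; adjust them to your environment):

# With minimal_response=true the items array is empty when nothing is down, so the
# presence of any "component_name" entry means at least one non-client component is down.
RESPONSE=$(curl -s -u admin:admin -H 'X-Requested-By: ambari' \
  'http://localhost:8080/api/v1/clusters/c4/components?(ServiceComponentInfo/init_count!=0|ServiceComponentInfo/install_failed_count!=0|ServiceComponentInfo/installed_count!=0)&ServiceComponentInfo/category!=CLIENT&minimal_response=true')
if echo "$RESPONSE" | grep -q '"component_name"'; then
  echo "false"   # some component instances are down
else
  echo "true"    # all non-client components are up
fi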
01-09-2018
09:53 PM
1 Kudo
@Joshua Connelly Thanks for reporting this issue. From the description, this looks like an Ambari bug. Can you please create an Apache Ambari Jira for it and let us know on this thread once you have? We will look into it and address it in the next Ambari release. Also, you mentioned in your description that this issue was noticed on HDP-2.6.2.14, but the question is tagged ambari-2.2.0, and Ambari 2.2.0 does not support HDP-2.6.2.14. Can you please verify the version information for both HDP and Ambari? For now, to work around this issue, you will need to edit a file on the ambari-server host at /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/stack_advisor.py. Comment out the config items in this file at the code link for now. Ambari will then no longer recommend changes to these configs whenever you attempt to change any other configs on your cluster.
11-29-2017
07:55 PM
1 Kudo
Thanks for the information @Amine ZITOUNI. I was able to confirm that this is an existing issue in Ambari since version 2.2.2: when NameNode HA is enabled and the default port for the NameNode is changed, the quick links do not reflect the changed port. Please report this as an Ambari bug at https://issues.apache.org/jira/projects/AMBARI and let us know on this thread. We will look to address this issue in an upcoming version of Ambari.
11-27-2017
06:50 PM
@Amine ZITOUNI Can you please let us know the release version of Apache Ambari that you are using?
09-07-2017
08:01 PM
1 Kudo
@Shishir Saxena As per your description, this looks most likely to be a JavaScript error (a bug) in ambari-web. If you can let us know which version of Ambari you are using and the JS error trace you are hitting, it will help us debug the issue further and verify whether it is a known issue. You can capture the JS error by using Google Chrome and executing the same steps: before you click Next, open Chrome's developer tools and look at the Console tab. You can also filter the level to Error so that info-level output is not dumped and you are only presented with JS errors. Clicking on an error should navigate to the code that is throwing it.
09-07-2017
07:15 PM
3 Kudos
@Chingiz Akmatov spark-env.sh is an Ambari-managed file and is exposed in the UI, so you can make changes to it from the Ambari UI. To do so, go to Spark2 -> Configs -> Advanced spark2-env section -> content property and add the environment variable.
09-07-2017
06:22 PM
1 Kudo
@suresh krish Ambari APIs respond only in JSON format. If you need an XML file for a specific service of a specific HDP stack from which you can get the service's version, that would be the metainfo.xml file for that service. You can fetch a service's metainfo.xml from the ambari-server host at /var/lib/ambari-server/resources/stacks/HDP/$StackVersion/services/$serviceName/metainfo.xml. Please substitute $StackVersion and $serviceName with the actual desired values. For example: /var/lib/ambari-server/resources/stacks/HDP/2.6/services/HDFS/metainfo.xml. The same file can be browsed on GitHub at the link.
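For example, a quick sketch for pulling the version out of such a file on the ambari-server host (illustrative only; depending on the stack, the service definition under stacks/ may just extend a common-services definition, in which case the <version> element lives in the referenced common-services metainfo.xml instead):

# Print the first <version> element found in the service's metainfo.xml
grep -m1 '<version>' /var/lib/ambari-server/resources/stacks/HDP/2.6/services/HDFS/metainfo.xml \
  | sed -e 's/.*<version>//' -e 's/<\/version>.*//'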
07-07-2017
05:14 PM
1 Kudo
@William Brooks In the widget context, only service page widgets are persisted in the Ambari DB; their definitions are REST API driven, so user-defined widgets can be added there. The "Create Widget" wizard allows the user to do exactly that. Dashboard page widgets and host page widgets, and their definitions, are still completely coded in ambari-web JS code, so to add any new widget of those kinds, the new widget definition code currently needs to be added to the ambari-web JS code. Pointers to the host page widget definition code: https://github.com/apache/ambari/tree/trunk/ambari-web/app/views/main/host/metrics
06-27-2017
06:28 AM
@David Pocivalnik It is OK to have a service that only has a slave component. The Flume service is a working example of that: https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/FLUME/1.4.0.2.0/metainfo.xml There seems to be something else in the custom service you defined that might be causing this issue. Does the new custom service have any configurations? Can you also post a zip file of the custom HBase service you are introducing?
06-26-2017
08:25 PM
@David Pocivalnik Every time the recommendation API call is made, a new log entry is created under /var/run/ambari-server/stack-recommendations/ on the ambari-server host. There will be several entries, and if you are not certain which is the latest, please move all existing entries to a different location and then go through the Add Service wizard again to produce a repro. Each entry contains stackadvisor.err and stackadvisor.out files; please refer to them for further debugging (see the sketch below for locating the latest entry). If the cause is not clear from the logs, please compress the directory and post it on this thread so that we have more information for debugging.
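For example, a quick way to locate and inspect the most recent entry (a sketch using the default path mentioned above):

# Find the newest stack-recommendations entry and inspect the stack advisor output
LATEST=$(ls -td /var/run/ambari-server/stack-recommendations/*/ | head -1)
cat "$LATEST/stackadvisor.err"
cat "$LATEST/stackadvisor.out"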
06-26-2017
08:24 PM
@David Pocivalnik Every time the recommendation API call is made, a new log entry is created under /var/run/ambari-server/stack-recommendations/ on the ambari-server host. There will be several entries, and if you are not certain which is the latest, please move all existing entries to a different location and then go through the Add Service wizard again to produce a repro. Each entry contains stackadvisor.err and stackadvisor.out files; please refer to them for further debugging. If the cause is not clear from the logs, please compress the directory and post it on this thread so that we have more information for debugging.
02-27-2017
05:35 PM
2 Kudos
@Sachin Ambardekar From the HDFS perspective, in some rare circumstances it was noticed that the secondary (or standby) NameNode fails to consume the edit logs. This results in more complicated situations if the active NameNode is restarted in the meantime (the unconsumed edit logs will have to be ignored). The simpler way to handle such a scenario gracefully is to always make sure the fsimage is up to date before stopping the NameNode. So, as a precautionary measure, work was done in Ambari to check and warn the user when they try to stop a NameNode whose last checkpoint is older than 12 hours. [1] HDFS-3.0.0.0 has implemented this check natively, and going forward Ambari might skip this warning. [2] The following Jiras and their descriptions were used as references for this answer: [1] https://issues.apache.org/jira/browse/AMBARI-12951 [2] https://issues.apache.org/jira/browse/HDFS-6353
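For reference, the manual way to force a fresh checkpoint before stopping a NameNode looks roughly like this (run as the hdfs user; on a kerberized cluster, authenticate first):

# Force a new fsimage checkpoint, then leave safemode again
hdfs dfsadmin -safemode enter
hdfs dfsadmin -saveNamespace
hdfs dfsadmin -safemode leave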
02-10-2017
11:38 AM
1 Kudo
@will chen You can add the Spark JobHistoryServer back using the ambari-web UI:
1. Go to the host details page by clicking on the Hosts tab and then clicking on the host where you want to install the Spark JobHistoryServer.
2. Click the Add button. It should show the list of components that can be added to the host.
3. Click Spark JobHistoryServer. Please see the attached image add-component.png.
If for any reason you are not able to add it from the UI, or you want to use the ambari-server REST APIs instead, the article at the link has a section "Step 5 - Create host components" which can be used to add host components (a rough sketch of that flow is shown below).
Regarding your other question ("I cannot find SPARK_THRIFTSERVER in /var/lib/ambari-server/resources/common-services/SPARK/1.2.0.2.2/metainfo.xml. Why does it exist in the SPARK service?"): Thrift server support for Spark was first added in the Spark 1.4.1 definition in Ambari (link). It is defined at /var/lib/ambari-server/resources/common-services/SPARK/1.4.1/metainfo.xml.
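As a rough sketch of that REST-based route (the Ambari host, target host, cluster name and credentials are placeholders; the component is created first and then installed and started by setting its state):

# Create the SPARK_JOBHISTORYSERVER host component on the chosen host
curl -u admin:admin -H 'X-Requested-By: ambari' -X POST \
  'http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/hosts/c6402.ambari.apache.org/host_components/SPARK_JOBHISTORYSERVER'
# Install it
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"HostRoles": {"state": "INSTALLED"}}' \
  'http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/hosts/c6402.ambari.apache.org/host_components/SPARK_JOBHISTORYSERVER'
# Start it once the install request finishes
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"HostRoles": {"state": "STARTED"}}' \
  'http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/hosts/c6402.ambari.apache.org/host_components/SPARK_JOBHISTORYSERVER'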
02-07-2017
11:27 PM
@Meng Meng Glad to hear that. Please accept the above answer when you get a chance, so it can be marked as the correct answer to this issue.
02-06-2017
07:18 PM
@Yurii Tymchii In addition to what @rnettleton suggested, please also share the detailed error that is logged when the API call resulting in the 500 server error is made. The ambari-server log is located at /var/log/ambari-server/ambari-server.log on the ambari-server host. This will help us understand the cause of the failure.
02-05-2017
09:49 PM
@Ben Weintraub I assume your question is in the context of UI-driven service deployment (the ambari-web installer or the Add Service wizard). There is no such property in released versions of Ambari that will make HDFS/SECONDARY_NAMENODE the default node that a custom service will be installed on, although, as I said in my previous comment, a service developer can always add Python code to the custom service's service advisor to achieve such requirements. Also, in Ambari trunk some work has already been done to achieve a similar requirement in a declarative manner rather than by writing Python code. This was done via AMBARI-19685; its associated Review Board request (link) has images of how the UI reacts when a host scope dependency is added for a component. Although this is not exactly what you are looking for, it will help your use case: if MY_SERVICE/MY_COMPONENT is not hosted on HDFS/SECONDARY_NAMENODE, the user will be prompted with an error message before they can move forward. This is a validation, and I understand that you are looking more for a default recommendation, so that by default MY_SERVICE/MY_COMPONENT is hosted on HDFS/SECONDARY_NAMENODE when the user lands on the "Assign Masters" page for the first time; if the user overrides the default layout, the AMBARI-19685 validation work should then show the message for the component-level dependency violation. Please report your requirement as a task in the Apache Ambari Jira and let us know over here. I will follow up on the reported Jira and try to add this functionality as part of the Ambari-3.0.0 release currently being developed.
02-03-2017
10:25 PM
1 Kudo
@Ben Weintraub Adding a host scope dependency at the component level will make sure that the dependent component is co-hosted with the required one. This is done by default in blueprint-based deployments. Post the ambari-2.5.0 release (Ambari trunk), work has been done to make sure that UI deployments also force users to co-host required components as marked in the stack. A user can still choose to deploy MY_SERVICE/MY_COMPONENT on additional hosts as well, in either blueprint- or UI-based deployment, and that will be permitted. If you strictly want to restrict the user to having MY_SERVICE/MY_COMPONENT only on HDFS/SECONDARY_NAMENODE, then you can add that logic as Python code in the service advisor of the custom service. The HAWQ service had some specific requirements for which they added such logic in its service advisor; you can refer to it as an example: https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HAWQ/2.0.0/service_advisor.py#L100-L121
02-03-2017
08:21 PM
2 Kudos
Hi @Meng Meng The Ambari UI is completely driven by the Ambari REST API. API call to create a new YARN config group and add a host from the Default group to it:

curl 'http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/config_groups' -u admin:admin -H "X-Requested-By: ambari" -i -X POST --data '[{"ConfigGroup":{"group_name":"custom_group","tag":"YARN","description":"This is a custom group","desired_configs":[],"hosts":[{"host_name":"c6401.ambari.apache.org"}]}}]'

If a host does not belong to the Default config group but belongs to some other custom group, and you want to move it to another custom config group, then it first needs to be removed from the prior custom group and then added to the desired new custom group. The same API as above (using PUT instead of POST) can be used to first remove it from the previous config group, and then a second PUT adds it to the new one (a sketch of that second call is shown below):

curl 'http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/config_groups/2' -u admin:admin -H "X-Requested-By: ambari" -i -X PUT --data '{"ConfigGroup":{"group_name":"custom_group","description":"This is a custom group","tag":"YARN","hosts":[],"desired_configs":[]}}' --compressed

Note: In the APIs above, you need to change the ambari-server hostname, cluster name, group_name, description and tag (service name) as applicable to your environment.
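As a sketch of the second half of that move, the corresponding PUT against the target custom group (here assumed to have id 3; the group id, names, tag, hosts and desired_configs are placeholders to adjust for your environment) adds the host to it:

curl 'http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/config_groups/3' -u admin:admin -H "X-Requested-By: ambari" -i -X PUT --data '{"ConfigGroup":{"group_name":"target_custom_group","description":"Target custom group","tag":"YARN","hosts":[{"host_name":"c6401.ambari.apache.org"}],"desired_configs":[]}}'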
01-31-2017
07:19 PM
@Sami Ahmad Can you let us know the value of the property at the time YARN RM HA was enabled? This can be done using the API described in my last comment:

http://localhost:8080/api/v1/clusters/c1/configurations/service_config_versions?service_name=YARN&service_config_version_note=This%20configuration%20is%20created%20by%20Enable%20ResourceManager%20HA%20wizard

I am trying to understand whether that YARN property was set to true when RM HA was enabled for the very first time and later got reverted to false, or whether the value of that property was false even when RM HA was completed. We can find out by looking at the property value via the API above, as that service config version is created when YARN RM HA is completed. Also, let us know the Ambari version; it allows us to look at the correct version of the Ambari code and verify whether this is a bug specific to the Ambari version you are using.
01-28-2017
12:32 AM
@Sami Ahmad
1. Can you please let us know the version of Ambari that you are using?
2. Can you also check the current value of the same property (yarn.resourcemanager.ha.enabled) on the YARN service config page in Ambari?
3. Can you view the config version that enabled RM HA and check the same property's value? You can do this from the UI or the API.
UI: enable-rm-ha.png
API: http://localhost:8080/api/v1/clusters/c1/configurations/service_config_versions?service_name=YARN&service_config_version_note=This%20configuration%20is%20created%20by%20Enable%20ResourceManager%20HA%20wizard
Once you get the response from the API, look up the property value; see the sketch below.
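For example, a quick command-line lookup (a sketch; adjust the Ambari host, cluster name and credentials to your environment):

# Fetch the service config version created by the Enable ResourceManager HA wizard
# and look for the property in question
curl -s -u admin:admin -H 'X-Requested-By: ambari' \
  'http://localhost:8080/api/v1/clusters/c1/configurations/service_config_versions?service_name=YARN&service_config_version_note=This%20configuration%20is%20created%20by%20Enable%20ResourceManager%20HA%20wizard' \
  | grep 'yarn.resourcemanager.ha.enabled'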
01-17-2017
10:56 PM
1 Kudo
@Sami Ahmad ambari-web uses the code pointed to below to determine whether to enable the Next button:
https://github.com/apache/ambari/blob/release-2.4.0/ambari-web/app/controllers/main/admin/highAvailability/nameNode/step4_controller.js#L45-L55
The following API call will help debug the reason behind this issue further:
http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/hosts/{NAMENODE_HOSTNAME}/host_components/NAMENODE?fields=metrics/dfs/namenode/Safemode,metrics/dfs/namenode/JournalTransactionInfo
The values of Safemode and JournalTransactionInfo should not be empty, and (LastAppliedOrWrittenTxId - MostRecentCheckpointTxId) should be less than or equal to 1.
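For example (a sketch; replace the Ambari host, cluster name, credentials and {NAMENODE_HOSTNAME} with your own values):

# Inspect the NameNode metrics that the wizard's Next button depends on
curl -s -u admin:admin -H 'X-Requested-By: ambari' \
  'http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/hosts/{NAMENODE_HOSTNAME}/host_components/NAMENODE?fields=metrics/dfs/namenode/Safemode,metrics/dfs/namenode/JournalTransactionInfo'
# In the response, Safemode and JournalTransactionInfo must be non-empty and
# (LastAppliedOrWrittenTxId - MostRecentCheckpointTxId) must be <= 1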
01-12-2017
07:39 PM
1 Kudo
@Shashant Panwar You will also need to clean the alert and service config tables. Some of them are as follows: serviceconfig, serviceconfigmapping, alert_definition, alert_current, alert_group, alert_history. I would rather encourage you to resolve the issue of Mahout not showing in the UI and then delete the service from the UI (if using ambari-2.4.0 or above) or via the Ambari API. Can you let us know which version of Ambari you are using? Also, what is the content of the clusterservices and servicedesiredstate tables, specifically whether a Mahout service entry is present or not (see the sketch below)?
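For example, a sketch of checking those two tables, assuming the default embedded PostgreSQL database and the ambari database user (adjust the client, credentials and service name casing for your setup):

# Check whether a MAHOUT entry is still present in the cluster service tables
psql -U ambari -d ambari -c "select * from clusterservices where service_name = 'MAHOUT';"
psql -U ambari -d ambari -c "select * from servicedesiredstate where service_name = 'MAHOUT';"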
01-11-2017
08:44 PM
1 Kudo
@Timothy Spann You can definitely use the granular Ambari APIs to delete the Hive Server Interactive host component and add it back on another host, but Hive Server Interactive also requires other clients (Slider, Tez, etc.) to work, so you would need to add those on the new host as well. I would suggest the easiest way is from the UI: disable Interactive Query (interactive-query.png) and save the settings (this automatically stops and deletes Hive Server Interactive from the existing host), wait for all background operations to complete, and then enable it again. While enabling Interactive Query again, you will be asked to choose the HS2 Interactive host, and when the settings are saved, the selected host will have HS2 Interactive installed along with the other dependent clients. The YARN queue should be refreshed automatically and HS2 Interactive will be running on the new host. Also, at any time you can go to the Hive service summary page and hover over HS2 Interactive to see which host it is presently installed on (hs2-interactive-hostname.png); this does not require clicking through and actually navigating to the HS2 Interactive host to identify its hostname.
01-11-2017
07:40 PM
@Sanket Korgaonkar
I am glad that the suggestion helped you. Please accept the answer so it can be a verified reference for other people who encounter a similar problem in the future.
01-11-2017
07:19 PM
1 Kudo
@Sanket Korgaonkar It seems that your API semantics are correct, but there is a typo in the service name and component name in the API call: it should be "HBASE_CLIENT" instead of "Hbase_Client" and "HBASE" instead of "Hbase". Can you retry after making that change and let us know if it worked for you? Below is the API call with the suggested changes; please replace the cluster name and host names as needed.

curl -u admin:admin -H 'X-Requested-By: ambari' -X POST -d '{ "RequestInfo":{ "command":"RESTART", "context":"Restart HBase client on c6401.ambari.apache.org", "operation_level":{ "level":"HOST", "cluster_name":"c1" } }, "Requests/resource_filters":[ { "service_name":"HBASE", "component_name":"HBASE_CLIENT", "hosts":"c6401.ambari.apache.org" } ] }' 'http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/requests'
01-04-2017
08:59 PM
4 Kudos
The feature of being able to download client configs after cluster deployment was initially added to Ambari in version 1.7.0. This feature helps Hadoop admins in the following known use cases:

- It can be helpful if a Hadoop admin or another developer wants to set up a client host that is not managed by Ambari.
- Ambari as of now (version 2.5.0) does not support multiple clusters. If a user environment has multiple clusters with the same set of client hosts that need to be configured to interact with all of them, then these hosts cannot be added to any one particular Ambari cluster and need to be managed outside of Ambari. In this scenario, being able to download all client configs is a helpful way to set up these client hosts.
- Further, this feature can be helpful if a user wants to create automation scripts for a Hadoop application running on an Ambari-managed cluster without using any existing node of the Ambari cluster for running the automation code. This requires setting up the application, which also includes setting up the Hadoop clients on the automation-dedicated hosts.

As stated earlier, this feature was first added in Ambari-1.7.0 and has been further enhanced with newer APIs in Ambari-2.5.0. Let's go through each supported API related to downloading client configurations.

1. Download the client configuration for a service component (supported since Ambari-1.7.0). This API downloads the base configs for the client component, meaning the configuration of the Default config group.

Example:
curl --user admin:admin -H "X-Requested-By: ambari" -X GET http://localhost:8080/api/v1/clusters/c1/services/HDFS/components/HDFS_CLIENT?format=client_config_tar -o output.tar.gz

Downloaded tarball:
tar tvf output.tar.gz
drwx------ root/root 0 2017-01-03 21:47 ./
-rw-r--r-- root/root 6639 2017-01-03 21:47 hdfs-site.xml
-rw-r--r-- root/root 2526 2017-01-03 21:47 core-site.xml
-rw-r--r-- root/root 5740 2017-01-03 21:47 hadoop-env.sh
-rw-r--r-- root/root 6764 2017-01-03 21:47 log4j.properties

UI Interface:

2. Download the client configuration for a host component (supported since Ambari-1.7.0). This API downloads the configs of a client component on a specific host. Note that Ambari supports host overrides via the config groups feature, so the configuration of a client component on a specific host can differ from the configuration of the same client component on other hosts. This API downloads the configs of the config group that the host component belongs to, meaning the actual in-effect configuration of the client on that host will be downloaded.

Example:
curl --user admin:admin -H "X-Requested-By: ambari" -X GET http://localhost:8080/api/v1/clusters/c1/hosts/c6401.ambari.apache.org/host_components/HDFS_CLIENT?format=client_config_tar -o output.tar.gz

Downloaded tarball:
tar tvf output.tar.gz
drwx------ root/root 0 2017-01-03 22:16 ./
-rw-r--r-- root/root 6639 2017-01-03 22:16 hdfs-site.xml
-rw-r--r-- root/root 2526 2017-01-03 22:16 core-site.xml
-rw-r--r-- root/root 5740 2017-01-03 22:16 hadoop-env.sh
-rw-r--r-- root/root 6764 2017-01-03 22:16 log4j.properties

UI Interface:

3. Download the client configuration for a service (supported from Ambari-2.5.0). This API downloads the base configs (Default config group) for all client components of a service.

Example:
curl --user admin:admin -H "X-Requested-By: ambari" -X GET http://localhost:8080/api/v1/clusters/c1/services/HIVE/components?format=client_config_tar -o output.tar.gz

Downloaded tarball:
tar tvf output.tar.gz
drwx------ root/root 0 2017-01-03 23:53 HIVE_CLIENT/./
-rw-r--r-- root/root 19226 2017-01-03 23:53 HIVE_CLIENT/hive-site.xml
-rw-r--r-- root/root 2148 2017-01-03 23:53 HIVE_CLIENT/hive-env.sh
-rw-r--r-- root/root 3050 2017-01-03 23:53 HIVE_CLIENT/hive-log4j.properties
-rw-r--r-- root/root 2652 2017-01-03 23:53 HIVE_CLIENT/hive-exec-log4j.properties
drwx------ root/root 0 2017-01-03 23:53 HCAT/./
-rw-r--r-- root/root 1275 2017-01-03 23:53 HCAT/hcat-env.sh

UI Interface:

4. Download the client configuration of a host (supported from Ambari-2.5.0). This API downloads the effective configurations of all client components installed on a specific host.

Example:
curl --user admin:admin -H "X-Requested-By: ambari" -X GET http://localhost:8080/api/v1/clusters/c1/hosts/c6401.ambari.apache.org/host_components?format=client_config_tar -o output.tar.gz

Downloaded tarball:
tar tvf output.tar.gz
drwx------ root/root 0 2017-01-03 23:42 YARN_CLIENT/./
-rw-r--r-- root/root 2135 2017-01-03 23:42 YARN_CLIENT/capacity-scheduler.xml
-rw-r--r-- root/root 5309 2017-01-03 23:42 YARN_CLIENT/yarn-env.sh
-rw-r--r-- root/root 2996 2017-01-03 23:42 YARN_CLIENT/core-site.xml
-rw-r--r-- root/root 16318 2017-01-03 23:42 YARN_CLIENT/yarn-site.xml
-rw-r--r-- root/root 10396 2017-01-03 23:42 YARN_CLIENT/log4j.properties
drwx------ root/root 0 2017-01-03 23:42 HDFS_CLIENT/./
-rw-r--r-- root/root 6639 2017-01-03 23:42 HDFS_CLIENT/hdfs-site.xml
-rw-r--r-- root/root 2996 2017-01-03 23:42 HDFS_CLIENT/core-site.xml
-rw-r--r-- root/root 5740 2017-01-03 23:42 HDFS_CLIENT/hadoop-env.sh
-rw-r--r-- root/root 10396 2017-01-03 23:42 HDFS_CLIENT/log4j.properties

UI Interface:

5. Download the client configuration of a cluster (supported from Ambari-2.5.0). This API downloads the base configurations (Default config group) of all client service components present in a cluster.

Example:
curl --user admin:admin -H "X-Requested-By: ambari" -X GET http://localhost:8080/api/v1/clusters/c1/components?format=client_config_tar -o output.tar.gz

Downloaded tarball:
tar tvf output.tar.gz
drwx------ root/root 0 2017-01-03 23:54 ZOOKEEPER_CLIENT/./
-rw-r--r-- root/root 2477 2017-01-03 23:54 ZOOKEEPER_CLIENT/log4j.properties
-rw-r--r-- root/root 310 2017-01-03 23:54 ZOOKEEPER_CLIENT/zookeeper-env.sh
drwx------ root/root 0 2017-01-03 23:54 YARN_CLIENT/./
-rw-r--r-- root/root 2135 2017-01-03 23:54 YARN_CLIENT/capacity-scheduler.xml
-rw-r--r-- root/root 5309 2017-01-03 23:54 YARN_CLIENT/yarn-env.sh
-rw-r--r-- root/root 2996 2017-01-03 23:54 YARN_CLIENT/core-site.xml
-rw-r--r-- root/root 16318 2017-01-03 23:54 YARN_CLIENT/yarn-site.xml
-rw-r--r-- root/root 10396 2017-01-03 23:54 YARN_CLIENT/log4j.properties
drwx------ root/root 0 2017-01-03 23:54 HDFS_CLIENT/./
-rw-r--r-- root/root 6639 2017-01-03 23:54 HDFS_CLIENT/hdfs-site.xml
-rw-r--r-- root/root 2996 2017-01-03 23:54 HDFS_CLIENT/core-site.xml
-rw-r--r-- root/root 5740 2017-01-03 23:54 HDFS_CLIENT/hadoop-env.sh
-rw-r--r-- root/root 10396 2017-01-03 23:54 HDFS_CLIENT/log4j.properties
drwx------ root/root 0 2017-01-03 23:54 MAPREDUCE2_CLIENT/./
-rw-r--r-- root/root 2996 2017-01-03 23:54 MAPREDUCE2_CLIENT/core-site.xml
-rw-r--r-- root/root 868 2017-01-03 23:54 MAPREDUCE2_CLIENT/mapred-env.sh
-rw-r--r-- root/root 6754 2017-01-03 23:54 MAPREDUCE2_CLIENT/mapred-site.xml

UI Interface:
01-03-2017
08:05 PM
@Indrek Mäestu Please take a backup of your DB before doing any further database operations, and also confirm that the cluster is on version 2.5.0.0-1245 of the HDP-2.5.0.0 stack. This can be done by executing the following query and getting the same result as printed below:

ambari=> select cluster_version.state,repo_version.version,repo_version.display_name from cluster_version full outer join repo_version on cluster_version.repo_version_id=repo_version.repo_version_id where cluster_version.state='CURRENT' ORDER BY repo_version.version;
state | version | display_name
---------+--------------+--------------
CURRENT | 2.5.0.0-1245 | HDP-2.5.0.0
(1 row)
You can also check the same in the host_version table to see that all hosts are in the 'CURRENT' state and on version 2.5.0.0-1245 of the HDP-2.5.0.0 stack:

ambari=> select host_version.host_id,host_version.state,repo_version.version,repo_version.display_name from host_version full outer join repo_version on host_version.repo_version_id=repo_version.repo_version_id ORDER BY repo_version.version;
host_id | state | version | display_name
---------+---------+--------------+--------------
1 | CURRENT | 2.5.0.0-1245 | HDP-2.5.0.0
(1 row)
I believe it should be safe to do the cascading delete operation if all hosts and the cluster are on the latest version, but it would be good if Nate Cole, who has more context on this part of the Ambari code, can confirm that deleting the row is the best course of action from here. cc @Nate
01-03-2017
10:59 AM
@Sami Ahmad Just to update and follow up on this issue: https://issues.apache.org/jira/browse/AMBARI-19287 has been fixed in the Ambari version currently under development (2.5.0), so similar confusion will not happen in the future. Thanks for bringing this issue to our notice!
01-03-2017
10:51 AM
As per the error in the attached ambari-server log, this seems to be a case of multiple registered repo versions with the same stack name and version. Can you verify whether that is indeed the case by calling the following API and checking whether there is more than one entry with the same stack name (HDP) and version (2.5)?

http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/stack_versions

Please post the output of the above API as a comment. If it also returns a 500 server error, then please check the following columns of the repo_version table in the Ambari database for duplicates of the (stack_id, version) combination. This can be done by executing the query below:

select repo_version_id,stack_id,version,display_name from repo_version;
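As a sketch, a query that flags such duplicates directly (assuming the default PostgreSQL ambari database; adjust the client and connection details for your setup):

# List (stack_id, version) pairs that are registered more than once in repo_version
psql -U ambari -d ambari -c "select stack_id, version, count(*) from repo_version group by stack_id, version having count(*) > 1;"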
01-03-2017
10:22 AM
@Indrek Mäestu In addition to providing the information asked for above by @Sagar Shimpi, can you please check the output of the stack version API call? An example is shown below: {
"href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/stack_versions?ClusterStackVersions/version=2.5&fields=ClusterStackVersions/state",
"items" : [
{
"href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/stack_versions/1",
"ClusterStackVersions" : {
"cluster_name" : "c1",
"id" : 1,
"repository_version" : 1,
"stack" : "HDP",
"state" : "CURRENT",
"version" : "2.5"
}
}
]
}
Please verify that the state is CURRENT. If it is not, then some host is not yet publishing its version as HDP-2.5, which can cause this issue of the Admin stack version page hanging. If that is the case, you can identify the hosts that are not publishing their version as 2.5 with the following API call: http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/stack_versions?ClusterStackVersions/version=2.5&fields=ClusterStackVersions/* If this is not the case, then we need to look at the browser devtools console for any JS errors (highlighted in red) and at the Network tab to debug this issue further.