Is there a way to execute Ambari service checks in bulk?
Created 01-21-2016 05:37 PM
I'd like Ambari to execute a service check for all installed components that are not in maintenance mode.
I couldn't find such an option in the UI, so I tried the REST API. I ran the command below and got back an "Accepted" status, but when I look at the list of background operations in the Ambari UI, I see only a single service check where I expected two.
curl -v -u $LOGIN:$PASSWORD -H "X-Requested-By:X-Requested-By" -X POST \
  "http://$AMBARI_HOST:8080/api/v1/clusters/$cluster_name/requests" \
  --data '[
    {"RequestInfo":{"context":"HIVE Service Check","command":"HIVE_SERVICE_CHECK"},"Requests/resource_filters":[{"service_name":"HIVE"}]},
    {"RequestInfo":{"context":"MAPREDUCE2 Service Check","command":"MAPREDUCE2_SERVICE_CHECK"},"Requests/resource_filters":[{"service_name":"MAPREDUCE2"}]}
  ]'
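For reference, posting one request body per call does work; the loop-based script later in this thread takes that approach. A minimal sketch, reusing the shell variables from the command above:

# One service check per POST; the /requests endpoint appears to run
# only the first request object when it is handed an array.
curl -s -u $LOGIN:$PASSWORD -H "X-Requested-By: ambari" -X POST \
  "http://$AMBARI_HOST:8080/api/v1/clusters/$cluster_name/requests" \
  --data '{"RequestInfo":{"context":"HIVE Service Check","command":"HIVE_SERVICE_CHECK"},"Requests/resource_filters":[{"service_name":"HIVE"}]}'
curl -s -u $LOGIN:$PASSWORD -H "X-Requested-By: ambari" -X POST \
  "http://$AMBARI_HOST:8080/api/v1/clusters/$cluster_name/requests" \
  --data '{"RequestInfo":{"context":"MAPREDUCE2 Service Check","command":"MAPREDUCE2_SERVICE_CHECK"},"Requests/resource_filters":[{"service_name":"MAPREDUCE2"}]}'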
Created on 01-23-2016 08:42 AM - edited 08-19-2019 04:46 AM
@Vladimir Zlatkin I just found a way to execute all service checks with one call 🙂
To start service checks in bulk, we have to use the same API that is used to trigger a rolling restart of DataNodes. The "request_schedules" API runs all of the defined commands in the specified order, and we can even specify a pause between the commands (the batch_separation_in_seconds setting in the payload below).
Start the bulk service checks:
curl -ivk -H "X-Requested-By: ambari" -u <user>:<password> -X POST \
  -d @payload.json http://myexample.com:8080/api/v1/clusters/bigdata/request_schedules
payload.json:
[ { "RequestSchedule":{ "batch":[ { "requests":[ { "order_id":1, "type":"POST", "uri":"/api/v1/clusters/bigdata/requests", "RequestBodyInfo":{ "RequestInfo":{ "context":"HDFS Service Check (batch 1 of 11)", "command":"HDFS_SERVICE_CHECK" }, "Requests/resource_filters":[ { "service_name":"HDFS" } ] } }, { "order_id":2, "type":"POST", "uri":"/api/v1/clusters/bigdata/requests", "RequestBodyInfo":{ "RequestInfo":{ "context":"YARN Service Check (batch 2 of 11)", "command":"YARN_SERVICE_CHECK" }, "Requests/resource_filters":[ { "service_name":"YARN" } ] } }, { "order_id":3, "type":"POST", "uri":"/api/v1/clusters/bigdata/requests", "RequestBodyInfo":{ "RequestInfo":{ "context":"MapReduce Service Check (batch 3 of 11)", "command":"MAPREDUCE2_SERVICE_CHECK" }, "Requests/resource_filters":[ { "service_name":"MAPREDUCE2" } ] } }, { "order_id":4, "type":"POST", "uri":"/api/v1/clusters/bigdata/requests", "RequestBodyInfo":{ "RequestInfo":{ "context":"HBase Service Check (batch 4 of 11)", "command":"HBASE_SERVICE_CHECK" }, "Requests/resource_filters":[ { "service_name":"HBASE" } ] } },{ "order_id":5, "type":"POST", "uri":"/api/v1/clusters/bigdata/requests", "RequestBodyInfo":{ "RequestInfo":{ "context":"Hive Service Check (batch 5 of 11)", "command":"HIVE_SERVICE_CHECK" }, "Requests/resource_filters":[ { "service_name":"HIVE" } ] } }, { "order_id":6, "type":"POST", "uri":"/api/v1/clusters/bigdata/requests", "RequestBodyInfo":{ "RequestInfo":{ "context":"WebHCat Service Check (batch 6 of 11)", "command":"WEBHCAT_SERVICE_CHECK" }, "Requests/resource_filters":[ { "service_name":"WEBHCAT" } ] } }, { "order_id":7, "type":"POST", "uri":"/api/v1/clusters/bigdata/requests", "RequestBodyInfo":{ "RequestInfo":{ "context":"PIG Service Check (batch 7 of 11)", "command":"PIG_SERVICE_CHECK" }, "Requests/resource_filters":[ { "service_name":"PIG" } ] } }, { "order_id":8, "type":"POST", "uri":"/api/v1/clusters/bigdata/requests", "RequestBodyInfo":{ "RequestInfo":{ "context":"Falcon Service Check (batch 8 of 11)", "command":"FALCON_SERVICE_CHECK" }, "Requests/resource_filters":[ { "service_name":"FALCON" } ] } }, { "order_id":9, "type":"POST", "uri":"/api/v1/clusters/bigdata/requests", "RequestBodyInfo":{ "RequestInfo":{ "context":"Storm Service Check (batch 9 of 11)", "command":"STORM_SERVICE_CHECK" }, "Requests/resource_filters":[ { "service_name":"STORM" } ] } }, { "order_id":10, "type":"POST", "uri":"/api/v1/clusters/bigdata/requests", "RequestBodyInfo":{ "RequestInfo":{ "context":"Oozie Service Check (batch 10 of 11)", "command":"OOZIE_SERVICE_CHECK" }, "Requests/resource_filters":[ { "service_name":"OOZIE" } ] } }, { "order_id":11, "type":"POST", "uri":"/api/v1/clusters/bigdata/requests", "RequestBodyInfo":{ "RequestInfo":{ "context":"Zookeeper Service Check (batch 11 of 11)", "command":"ZOOKEEPER_QUORUM_SERVICE_CHECK" }, "Requests/resource_filters":[ { "service_name":"ZOOKEEPER" } ] } } ] }, { "batch_settings":{ "batch_separation_in_seconds":1, "task_failure_tolerance":1 } } ] } } ]
Result
This is returned by the API:
{ "resources" : [ { "href" : "http://myexample.com:8080/api/v1/clusters/bigdata/request_schedules/68", "RequestSchedule" : { "id" : 68 } } ] }
[Screenshot: Ambari operations showing the scheduled service checks]
Created 01-22-2016 01:39 AM
Below is a sample script without batching. It would be really nice if someone could figure out how to get Ambari to accept a bulk request for service checks, as described here.
#!/usr/bin/env bash
# Run an Ambari service check for every installed service that is not
# in maintenance mode.

AMBARI_HOST=${1:-sandbox.hortonworks.com}
LOGIN=admin
PASSWORD=admin

# Allow credentials to be overridden from ~/.ambari_login if present.
if [ -e "$HOME/.ambari_login" ]; then
  . "$HOME/.ambari_login"
fi

# Look up the cluster name.
cluster_name=$(curl -s -u $LOGIN:$PASSWORD "http://$AMBARI_HOST:8080/api/v1/clusters" \
  | python -mjson.tool \
  | perl -ne '/"cluster_name":.*?"(.*?)"/ && print "$1\n"')
if [ -z "$cluster_name" ]; then
  exit
fi
echo "Got cluster name: $cluster_name"

# List the services that are not in maintenance mode.
running_services=$(curl -s -u $LOGIN:$PASSWORD "http://$AMBARI_HOST:8080/api/v1/clusters/$cluster_name/services?fields=ServiceInfo/service_name&ServiceInfo/maintenance_state=OFF" \
  | python -mjson.tool \
  | perl -ne '/"service_name":.*?"(.*?)"/ && print "$1\n"')
if [ -z "$running_services" ]; then
  exit
fi
echo "Got running services: $running_services"

# POST one service-check request per service. ZooKeeper is special-cased
# because its check command is named ZOOKEEPER_QUORUM_SERVICE_CHECK.
for s in $running_services; do
  if [ "$s" == "ZOOKEEPER" ]; then
    post_body="{\"RequestInfo\":{\"context\":\"$s Service Check\",\"command\":\"${s}_QUORUM_SERVICE_CHECK\"},\"Requests/resource_filters\":[{\"service_name\":\"$s\"}]}"
  else
    post_body="{\"RequestInfo\":{\"context\":\"$s Service Check\",\"command\":\"${s}_SERVICE_CHECK\"},\"Requests/resource_filters\":[{\"service_name\":\"$s\"}]}"
  fi
  curl -s -u $LOGIN:$PASSWORD -H "X-Requested-By:X-Requested-By" -X POST \
    --data "$post_body" "http://$AMBARI_HOST:8080/api/v1/clusters/$cluster_name/requests"
done
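Assuming the script is saved as bulk_service_checks.sh (the file name here is hypothetical), it takes the Ambari host as an optional first argument and falls back to sandbox.hortonworks.com:

chmod +x bulk_service_checks.sh
./bulk_service_checks.sh ambari.example.com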
Created 07-17-2016 08:11 AM
Vlad, do we need to run this service-check script with an admin account?
Created 01-23-2016 12:41 PM
@Jonas Straub this needs to be converted to an article!
Created 01-26-2016 09:22 PM
@Artem Ervits @Vladimir Zlatkin I have created an article for this solution; please see https://community.hortonworks.com/articles/11852/ambari-api-run-all-service-checks-bulk.html
I have also added 9 more services to the payload, which should cover almost every service in the cluster now.
Created 09-09-2017 08:18 AM
Thanks for the article.
Is there any way to figure out whether a service check has passed or failed from the API output (and not from the Ambari GUI)? I'm getting the output below but am not sure how to interpret it.
API output:
{ "href" : "http://<host ip>:8080/api/v1/clusters/DEMO/requests/11", "Requests" : { "id" : 11, "status" : "Accepted" }
Created 07-17-2016 06:46 PM
@vijayakumar Ramdoss yes, you need an Ambari ID with admin/super-user access.
Created 09-27-2016 06:20 PM
I've published a CLI tool to handle all of this more easily, including auto-generating the payload and inferring the cluster name and services to check. It has --help with lots of options, including --wait, which tracks the progress status of the request and returns only when complete, and --cancel, which stops any outstanding service checks if you accidentally launch too many while playing with the tool 🙂
You can find it on my GitHub here:
https://github.com/harisekhon/pytools
./ambari_trigger_service_checks.py --help
examples:
./ambari_trigger_service_checks.py --all
./ambari_trigger_service_checks.py --cancel
./ambari_trigger_service_checks.py --services hdfs,yarn --wait
