12-17-2017
10:06 PM
@deepak rathod
The Hadoop YARN Cluster Applications API supports filtering for failed jobs over the last 24 hours.
* http://<rm http address:port>/ws/v1/cluster/apps
**Please refer to the attached .txt file for the exact commands, because the Community web page replaces some characters with odd symbols**
Example: GET "http://Resource-Manager-Address:8088/ws/v1/cluster/apps?limit=10&startedTimeBegin=1510533313778&startedTimeEnd=1513533313778&states=FAILED" (see attached txt file, Query 1).
Use the command above as a reference, replacing the address and port with your ResourceManager's IP address and port number.
Explanation:
I'm limiting the query results to 10 with the limit=10 parameter; the started (and finished) times each have a begin and end parameter so you can specify ranges, and states=FAILED restricts the results to failed jobs.
startedTimeBegin:- GMT: Monday, November 13, 2017 12:35:13.778 AM
startedTimeEnd:- GMT: Sunday, December 17, 2017 5:55:13.778 PM
The whole REST API call returns the first 10 applications (limit=10) whose state is FAILED and whose start time falls in the period Nov 13 12:35:13.778 AM - Dec 17 5:55:13.778 PM.
You can change the begin and end times according to your requirements.
Note: startedTimeBegin and startedTimeEnd are specified in milliseconds since epoch.
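As a minimal sketch of the epoch-millisecond arithmetic above (the ResourceManager address is a placeholder, and the helper name is my own, not part of the API):

```python
from datetime import datetime, timedelta, timezone
from urllib.parse import urlencode

# Hypothetical ResourceManager address; replace with your own host:port.
RM = "http://Resource-Manager-Address:8088"

def failed_apps_url(hours=24, limit=10):
    """Build a Cluster Applications API URL for FAILED apps
    started within the last `hours` hours."""
    end = datetime.now(timezone.utc)
    begin = end - timedelta(hours=hours)
    params = urlencode({
        "limit": limit,
        "startedTimeBegin": int(begin.timestamp() * 1000),  # ms since epoch
        "startedTimeEnd": int(end.timestamp() * 1000),
        "states": "FAILED",
    })
    return f"{RM}/ws/v1/cluster/apps?{params}"
```

You could fetch the resulting URL with curl or any HTTP client; the point here is only how the millisecond timestamps are derived.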
2. Query to get all apps in the FINISHED or KILLED states for a specific user and time period: GET "http://Resource-Manager-Address:8088/ws/v1/cluster/apps?limit=20&states=FINISHED,KILLED&user=<user-id>&startedTimeBegin=1510533313778&startedTimeEnd=1513533313778" (see attached txt file, Query 2).
Below is the list of application states allowed in the query; you can use one state or several:
NEW, NEW_SAVING, SUBMITTED, ACCEPTED, RUNNING, FINISHED, FAILED, KILLED
In addition, these query parameters are supported:
* states - applications matching the given application states, specified as a comma-separated list.
* finalStatus - the final status of the application - reported by the application itself
* user - user name
* queue - queue name
* limit - total number of app objects to be returned
* startedTimeBegin - applications with start time beginning with this time, specified in ms since epoch
* startedTimeEnd - applications with start time ending with this time, specified in ms since epoch
* finishedTimeBegin - applications with finish time beginning with this time, specified in ms since epoch
* finishedTimeEnd - applications with finish time ending with this time, specified in ms since epoch
* applicationTypes - applications matching the given application types, specified as a comma-separated list.
* applicationTags - applications matching any of the given application tags, specified as a comma-separated list.
* deSelects - generic fields to be skipped in the result.
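A small hypothetical helper (my own sketch, not part of the API) that combines any of the parameters listed above into one query string; parameter names are passed through verbatim, and the default host is a placeholder:

```python
from urllib.parse import urlencode

def apps_query(rm="http://Resource-Manager-Address:8088", **params):
    """Combine any supported Cluster Applications API parameters
    (states, user, queue, limit, startedTimeBegin, ...) into a URL."""
    qs = urlencode({k: v for k, v in params.items() if v is not None})
    return f"{rm}/ws/v1/cluster/apps" + (f"?{qs}" if qs else "")
```

For example, `apps_query(limit=20, states="FINISHED,KILLED", user="hive")` (the user name is made up) reproduces the shape of Query 2 above; note that urlencode percent-encodes the comma, which the API accepts.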
To get failed jobs for a specific user and time period: GET "http://Resource-Manager-Address:8088/ws/v1/cluster/apps?limit=1&startedTimeBegin=1510533313778&startedTimeEnd=1513533313778&states=FAILED&user=<user-id>" (see attached txt file, Query 3).
To get finished jobs for a specific user:
GET "http://Resource-Manager-Address:8088/ws/v1/cluster/apps?limit=1&states=FINISHED&user=<user-id>"
(see attached txt file, Query 4).
To get finished jobs based on the application type:
The query below returns finished jobs of application type tez and limits the results to 1.
GET "http://Resource-Manager-Address:8088/ws/v1/cluster/apps?limit=1&states=FINISHED&applicationTypes=tez"
(see attached txt file, Query 5).
To get spark application type jobs:
GET "http://Resource-Manager-Address:8088/ws/v1/cluster/apps?limit=1&states=FINISHED&applicationTypes=spark"
(see attached txt file, Query 6).
You can use any one of the above parameters, or a combination of them, in your REST API queries. (Attachment: cluster-applications-api.txt)
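The responses from these queries come back as JSON with a top-level "apps" object wrapping an "app" array (see the ResourceManager REST docs linked below). A minimal parsing sketch; the application id and user in the sample are made up:

```python
import json

# Trimmed sample mimicking the documented response shape:
# a top-level "apps" object wrapping an "app" array.
sample = json.loads("""
{
  "apps": {
    "app": [
      {"id": "application_1513000000000_0001", "user": "hive",
       "state": "FAILED", "applicationType": "TEZ"}
    ]
  }
}
""")

def app_list(response):
    """Extract the list of application objects, tolerating empty results."""
    apps = response.get("apps") or {}
    return apps.get("app") or []

for app in app_list(sample):
    print(app["id"], app["state"])
```

The `or {}` / `or []` guards are there because an empty result may come back without the nested array.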
For Reference
https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Applications_API
10-30-2017
07:07 PM
Hi @deepak rathod, yes, you are using HDP-2.3.2.0. You need to upgrade to HDP-2.6.2.0. Here are the docs: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.2/index.html https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.2.0/bk_ambari-upgrade/content/ambari_upgrade_guide.html
01-22-2019
07:31 PM
@deepak rathod I have encountered the same issue. Were you able to fix it? If yes, could you please share the solution here? Thanks in advance.