Created on 05-08-2017 03:07 PM - edited 09-16-2022 04:34 AM
Hi,
How do I find the Aggregate Resource Allocation for an application run on YARN?
I know yarn application -status <application Id> provides that info, but it doesn't work for historical jobs because they are no longer present in the ResourceManager by then. I can see this information in Cloudera Manager, though. Does Cloudera Manager preserve it after the applications expire from the ResourceManager? If so, is there an API I can leverage to retrieve it for historical jobs?
Alternatively, I glanced through the JobHistory server counters, but the numbers didn't tie out when I tried to reconcile them, which makes me suspect I'm looking at the wrong counters.
The information I am looking for is the Aggregate Resource Allocation line highlighted in the output below:
yarn application -status application_XXXXXXXXXXX_XXX97
Application-Id : application_XXXXXXXXXXX_XXX97
Application-Name : blah blah blah
Application-Type : MAPREDUCE
User : user1
Queue : queue1
Start-Time : 1494279901683
Finish-Time : 1494280086823
Progress : 100%
State : FINISHED
Final-State : SUCCEEDED
Aggregate Resource Allocation : 674854 MB-seconds, 329 vcore-seconds
Log Aggregation Status : SUCCEEDED
Diagnostics :
I'm interested in either an API, if this information is readily available through one, or in how to calculate it from the data available in the Job History counters.
Thanks in advance
Created 05-08-2017 10:30 PM
Hi @code0404 Yes, you can use the YARN history API.
Please see the following documentation:
https://hadoop.apache.org/docs/r2.6.0/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html
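For example, while an application is still retained in the ResourceManager, its aggregate usage is exposed directly as memorySeconds and vcoreSeconds by the Cluster Application API. A minimal sketch (hostname and port are assumptions, the application id is the placeholder from your output):
RM=http://your-resourcemanager:8088              # assumption: RM web UI on the default port
APPID=application_XXXXXXXXXXX_XXX97              # placeholder id from the -status output above
# pull the per-application aggregate straight from the RM REST API
curl -s "$RM/ws/v1/cluster/apps/$APPID" \
  | python -m json.tool \
  | grep -E '"memorySeconds"|"vcoreSeconds"'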
Created 05-09-2017 07:39 AM
@Fawze wrote: Hi @code0404 Yes, you can use the YARN history API.
Please see the following documentation:
https://hadoop.apache.org/docs/r2.6.0/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html
Hi @Fawze
Thanks for sharing the API; I had looked at it before. The problem with this API is that it only retrieves statistics for applications that are still in the RM. I reckon applications older than 24-48 hours are moved out of the RM to the JobHistory server, so their stats are no longer retrievable this way. I could do a batch collection every day and store the results locally for later retrieval, but I am keen to understand how Cloudera Manager retrieves this info, i.e. via some API, from cluster logs, or from its own internal database populated by frequent backend stats collection.
Created 05-09-2017 08:03 AM
Makes sense, I wanted to do the same: extract and load into a DB for later retrieval, but I was wondering whether these stats are stored historically anywhere. Thanks Fawze. I will accept this as the answer if there is no other way to retrieve them.
Created 05-09-2017 08:07 AM
Another question along the same lines, if you don't mind answering: how is the aggregate resource allocation calculated? In other words, is it possible to arrive at those figures using any of the counters in the JobHistory server for the applications?
Created on 05-09-2017 09:49 AM - edited 05-09-2017 09:50 AM
@code0404 Do you have Spark jobs, or are all your jobs MapReduce?
Normally, summing the vcore-seconds and memory-seconds is enough.
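Roughly speaking, the aggregate is the sum over all containers (AM included) of the container's memory multiplied by the seconds it was held, and likewise for vcores. A back-of-the-envelope check against the -status output earlier in the thread, assuming 2048 MB / 1-vcore containers (an assumption, adjust to your container sizes):
MEM_SECONDS=674854       # MB-seconds reported by YARN
VCORE_SECONDS=329        # vcore-seconds reported by YARN
CONTAINER_MB=2048        # assumed container size
# container-seconds implied by the memory figure; this comes out close to the
# vcore-seconds above, which is consistent with 1 vcore per container
echo $(( MEM_SECONDS / CONTAINER_MB ))   # ~329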
Created 05-09-2017 01:01 PM
@Fawze We have both, but the majority are MapReduce. They all run on YARN, though. I could use the Spark history server to collect metrics for Spark, but for MapReduce I'm still trying.
I tried summing the vcore-seconds taken by the mappers and reducers, yet they don't tally with the aggregated value I see in Cloudera Manager or in yarn application -status <app_id>. Perhaps I am overlooking something during conversion; I am converting milliseconds to seconds and hope I am doing that right.
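For reference, this is roughly what I'm summing, via the JobHistory server REST API (the port and the counter names are my assumptions, based on the JobCounter group). As far as I can tell the MB_MILLIS_* / VCORES_MILLIS_* counters only cover map and reduce tasks, so the AM container wouldn't be included, which might explain part of the gap:
JHS=http://your-jobhistory-server:19888          # assumption: JobHistory server on the default port
JOBID=job_XXXXXXXXXXX_XXX97                      # job id matching the application id above
# print the totals for the per-task resource counters
curl -s "$JHS/ws/v1/history/mapreduce/jobs/$JOBID/counters" \
  | python -m json.tool \
  | grep -E '"name"|"totalCounterValue"' \
  | grep -A1 -E 'MB_MILLIS_MAPS|MB_MILLIS_REDUCES|VCORES_MILLIS_MAPS|VCORES_MILLIS_REDUCES'
# divide the MB_MILLIS_* and VCORES_MILLIS_* totals by 1000 to get MB-seconds and vcore-seconds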
Created 05-09-2017 01:10 PM
Here is how I'm doing this. I collect the aggregate metrics per pool, but you can do the same by application name or by user.
# Epoch timestamps in milliseconds covering the last 24 hours
STARTDATE=`date -d " -1 day " +%s%N | cut -b1-13`
ENDDATE=`date +%s%N | cut -b1-13`
# DC must be set beforehand; it is only used as a prefix for the metric names.
# Query the ResourceManager REST API for apps that finished in that window
# (quote the URL so the & is not interpreted by the shell).
result=`curl -s "http://your-yarn-history-server:8088/ws/v1/cluster/apps?finishedTimeBegin=$STARTDATE&finishedTimeEnd=$ENDDATE"`
# Sum vcoreSeconds per queue
echo "$result" | python -m json.tool | sed 's/["|,]//g' | grep -E "queue|coreSeconds" | awk -v DC="$DC" ' /queue/ { queue = $2 }
/vcoreSeconds/ { arr[queue]+=$2 ; }
END { for (x in arr) {print DC ".yarn." x ".cpums="arr[x]} } '
# Sum memorySeconds per queue
echo "$result" | python -m json.tool | sed 's/["|,]//g' | grep -E "queue|memorySeconds" | awk -v DC="$DC" ' /queue/ { queue = $2 }
/memorySeconds/ { arr1[queue]+=$2 ; }
END { for (y in arr1) {print DC ".yarn." y ".memorySeconds="arr1[y]} } '
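To run it, I set DC to a label for the cluster and schedule the script daily from cron, for example (the path and file names here are just placeholders):
# crontab sketch, assuming the snippet above is saved as /path/to/yarn_pool_usage.sh
DC=mycluster
0 1 * * * /path/to/yarn_pool_usage.sh >> /var/log/yarn_pool_usage.metrics 2>&1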