Member since: 01-25-2017
Posts: 396
Kudos Received: 28
Solutions: 11
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 864 | 10-19-2023 04:36 PM
 | 4426 | 12-08-2018 06:56 PM
 | 5535 | 10-05-2018 06:28 AM
 | 20097 | 04-19-2018 02:27 AM
 | 20119 | 04-18-2018 09:40 AM
05-10-2017
09:10 PM
If you decommissioned the node before, it is better to recommission it. There is no need to format any disk on the node after a hardware issue; only if you replaced a disk on the server should you format the new disk. If you didn't remove or decommission the node, there is nothing you need to do; the node will rejoin the cluster. A rough sketch of the recommission steps is below.
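A minimal sketch of recommissioning a DataNode outside of Cloudera Manager, assuming the host sits in an exclude file referenced by dfs.hosts.exclude (the file path and host name here are placeholders; with Cloudera Manager, just use the host's Recommission action instead):

sed -i '/decommissioned-host.example.com/d' /etc/hadoop/conf/dfs.exclude   # drop the host from the exclude list
hdfs dfsadmin -refreshNodes   # make the NameNode re-read its include/exclude lists
yarn rmadmin -refreshNodes    # same for the ResourceManager, if YARN also excluded the host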
05-09-2017
01:13 PM
@hrishi1dypim Are you running it from impala-shell or Hue?
05-09-2017
01:10 PM
@code0404 See how I'm doing this. I collect the aggregate metrics per pool, but you can aggregate by the application name or the user instead. Note that $DC is just a prefix label (e.g. the datacenter name) set earlier in my script, and the curl URL has to be quoted so the shell doesn't treat the & as a background operator:

STARTDATE=`date -d " -1 day " +%s%N | cut -b1-13`
ENDDATE=`date +%s%N | cut -b1-13`
result=`curl -s "http://your-yarn-history-server:8088/ws/v1/cluster/apps?finishedTimeBegin=$STARTDATE&finishedTimeEnd=$ENDDATE"`

# vcore-seconds per queue
echo "$result" | python -m json.tool | sed 's/["|,]//g' | grep -E "queue|coreSeconds" | \
  awk -v DC="$DC" ' /queue/ { queue = $2 } /vcoreSeconds/ { arr[queue]+=$2 } END { for (x in arr) { print DC ".yarn." x ".cpums=" arr[x] } } '

# memory-seconds per queue
echo "$result" | python -m json.tool | sed 's/["|,]//g' | grep -E "queue|memorySeconds" | \
  awk -v DC="$DC" ' /queue/ { queue = $2 } /memorySeconds/ { arr1[queue]+=$2 } END { for (y in arr1) { print DC ".yarn." y ".memorySeconds=" arr1[y] } } '
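If you want it per user instead, a minimal sketch along the same lines (it reuses the $result variable fetched above; "user" and "vcoreSeconds" are standard fields of the ResourceManager apps response):

# vcore-seconds per user
echo "$result" | python -m json.tool | sed 's/["|,]//g' | grep -E "user|coreSeconds" | \
  awk ' /user/ { user = $2 } /vcoreSeconds/ { arr[user]+=$2 } END { for (u in arr) { print "yarn." u ".cpums=" arr[u] } } '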
05-09-2017
09:49 AM
@code0404 Do you have Spark jobs, or are all your jobs MapReduce? Normally, summing the vcore-seconds and memory-seconds is enough; see the sketch below.
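For example, a rough sketch of that summation over everything that finished (MapReduce and Spark applications both report vcoreSeconds and memorySeconds through the ResourceManager REST API; the host name is a placeholder):

result=`curl -s "http://your-yarn-history-server:8088/ws/v1/cluster/apps?states=FINISHED"`
echo "$result" | python -m json.tool | sed 's/["|,]//g' | \
  awk ' /vcoreSeconds/ { cpu+=$2 } /memorySeconds/ { mem+=$2 } END { print "total vcoreSeconds=" cpu; print "total memorySeconds=" mem } '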
05-09-2017
07:55 AM
You can increase the time-series storage retention to keep more historical data. What I did on my side is write a script that collects the data daily and pushes it to Grafana; a sketch of that setup follows.
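A sketch of that setup, assuming the metrics land in a Graphite backend that Grafana reads from (the host, port, and script paths are placeholders):

# crontab entry: collect once a day at 01:00 and push the results
#   0 1 * * * /opt/scripts/collect_yarn_metrics.sh | /opt/scripts/push_to_graphite.sh

# push_to_graphite.sh: turns "name=value" lines into Graphite's plaintext protocol
GRAPHITE_HOST=graphite.example.com
GRAPHITE_PORT=2003
NOW=`date +%s`
while IFS='=' read -r metric value; do
  echo "$metric $value $NOW"     # plaintext protocol: "metric value timestamp"
done | nc "$GRAPHITE_HOST" "$GRAPHITE_PORT"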
05-08-2017
10:30 PM
Hi @code0404 Yes, you can use the YARN history API. Please see the following documentation: https://hadoop.apache.org/docs/r2.6.0/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html
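For example, a minimal call (the host is a placeholder for your ResourceManager on port 8088):

curl -s "http://your-resourcemanager:8088/ws/v1/cluster/apps?states=FINISHED" | python -m json.tool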
05-03-2017
12:58 PM
All looks fine; you can even reduce yarn.nodemanager.resource.memory-mb.
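If you want to double-check the value currently deployed on a node, a quick sketch (assuming the standard client config path /etc/hadoop/conf):

grep -A1 'yarn.nodemanager.resource.memory-mb' /etc/hadoop/conf/yarn-site.xml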
05-03-2017
10:16 AM
No need to increase the RAM on the server; it is enough to increase the memory for the NameNode service. 125 GB is too much. Increasing the memory won't solve the issue, but it will keep the NameNode from being overloaded. A sketch is below.
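For a deployment managed outside Cloudera Manager, a sketch of raising only the NameNode heap via hadoop-env.sh (the 32g figure is just an example and should be sized to your file and block count; with Cloudera Manager, change the NameNode Java heap size setting in the HDFS service configuration instead):

# hadoop-env.sh: raise the NameNode JVM heap only, not the whole machine's RAM
export HADOOP_NAMENODE_OPTS="-Xmx32g -Xms32g ${HADOOP_NAMENODE_OPTS}"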
04-27-2017
11:28 PM
1 Kudo
This is a bug that was fixed in 5.10.1.