
YARN Applications wait for long time in Accepted State

Contributor

Hello All,

We are using HDP 2.5.6 in our production cluster, with 5 queues configured in the Capacity Scheduler.

We have allocated 840 GB of total memory to YARN, and preemption is enabled for all queues.

Out of that, the 'talend' queue is given 25% minimum capacity and 100% maximum capacity. Screenshot of the 'talend' queue attached (talend-queue.png).

The problem is that although the 'talend' queue has plenty of resources available, applications sit in the ACCEPTED state for a long time. Screenshot of an accepted application attached (accepted-application.png). Some applications stay in ACCEPTED for more than 5 hours.

The diagnostics section also shows an incorrect AM resource value.

In the ResourceManager logs, we see the following messages:

2018-08-10 06:12:13,129 INFO  capacity.LeafQueue (LeafQueue.java:activateApplications(662)) - Not activating application application_1529949441873_131872 as  amIfStarted: <memory:348160, vCores:85> exceeds amLimit: <memory:344064, vCores:1>
2018-08-10 06:12:13,129 INFO  capacity.LeafQueue (LeafQueue.java:activateApplications(662)) - Not activating application application_1529949441873_131873 as  amIfStarted: <memory:348160, vCores:85> exceeds amLimit: <memory:344064, vCores:1>
2018-08-10 06:12:13,129 INFO  capacity.LeafQueue (LeafQueue.java:activateApplications(662)) - Not activating application application_1529949441873_131874 as  amIfStarted: <memory:348160, vCores:85> exceeds amLimit: <memory:344064, vCores:1>

The queue still has AM resources available, yet the log reports that the amLimit is exceeded, and the amLimit value itself does not look correct.
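The numbers in those log lines can be checked with quick arithmetic. As a sketch, assuming (this is an inference from the log, not confirmed in the thread) that the scheduler derives the limit from the queue's maximum capacity and its maximum-am-resource-percent rather than from its 25% guaranteed capacity:

```python
# Back-of-the-envelope check of the amLimit reported in the RM log.
# Assumption (inferred, not confirmed): amLimit is computed as
#   cluster memory x queue maximum-capacity x maximum-am-resource-percent.

CLUSTER_MEMORY_MB = 840 * 1024   # 840 GB given to YARN
TALEND_MAX_CAPACITY = 1.0        # maximum-capacity=100 (i.e. 100%)
TALEND_MAX_AM_PERCENT = 0.4      # maximum-am-resource-percent=0.4

am_limit_mb = int(CLUSTER_MEMORY_MB * TALEND_MAX_CAPACITY * TALEND_MAX_AM_PERCENT)
print(am_limit_mb)  # 344064, matching "amLimit: <memory:344064, ...>" in the log

# The AMs that would be running if this app started total 348160 MB,
# which exceeds the limit, so the application is left in ACCEPTED.
am_if_started_mb = 348160
print(am_if_started_mb > am_limit_mb)  # True
```

Under that assumption the log is arithmetically consistent: the AMs alone are consuming the AM headroom even though the queue as a whole has free resources. The odd `vCores:1` in the limit may simply mean the scheduler is comparing memory only (the default resource calculator), so the vCore figure is not meaningful.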

Why do applications stay in the ACCEPTED state even though plenty of resources are available at the queue level?

Please suggest.

1 ACCEPTED SOLUTION

Contributor

Restarting the ResourceManager clears its cache and resolves the issue.


12 REPLIES

Explorer

What are the minimum and maximum container sizes in your YARN configuration?

Contributor

@Mohammad Hamdan

The minimum container size is 4 GB and the maximum is 8 GB.
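Those two settings matter here because YARN rounds every container request up to a multiple of the minimum allocation, capped at the maximum. A minimal sketch of that normalization (a simplification of the scheduler's behavior, assuming the 4 GB / 8 GB values above):

```python
# Sketch of YARN container-size normalization: requests are rounded up
# to the nearest multiple of the minimum allocation and capped at the max.

MIN_MB = 4096  # yarn.scheduler.minimum-allocation-mb (4 GB)
MAX_MB = 8192  # yarn.scheduler.maximum-allocation-mb (8 GB)

def normalize(requested_mb):
    # Round up to the next multiple of MIN_MB, then clamp to [MIN_MB, MAX_MB].
    rounded = ((requested_mb + MIN_MB - 1) // MIN_MB) * MIN_MB
    return min(max(rounded, MIN_MB), MAX_MB)

print(normalize(1024))  # 4096: a small ask still costs a full 4 GB container
print(normalize(5000))  # 8192: anything over 4 GB jumps to an 8 GB container
```

With a 4 GB minimum, even a lightweight ApplicationMaster occupies at least 4 GB, which is counted against the queue's AM limit and can exhaust it faster than expected.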

Explorer

Could you open the RM UI and share a screenshot of the main page as a first step?

Also, can you share the full log from the active RM node as a next step?

Contributor

Please find attached the RM UI and RM Scheduler screenshots (rm-ui.png, rm-scheduler.png). I will attach the RM logs soon.
Please suggest.

Explorer

What is the value of the AM share on the leaf queue? Try setting it to -1.0f.

Contributor

The AM share value is 0.4, i.e. 40%. What does -1.0f mean? I did not understand.

Explorer

Could you go to Ambari -> YARN -> Advanced -> Scheduler -> Capacity Scheduler and paste the contents here?

Contributor

@Mohammad Hamdan

Here is the content of the Capacity Scheduler configuration. Please suggest:

yarn.scheduler.capacity.maximum-am-resource-percent=0.2
yarn.scheduler.capacity.maximum-applications=10000
yarn.scheduler.capacity.node-locality-delay=40
yarn.scheduler.capacity.root.accessible-node-labels=*
yarn.scheduler.capacity.root.acl_administer_queue=*
yarn.scheduler.capacity.root.capacity=100
yarn.scheduler.capacity.root.default.acl_submit_applications=*
yarn.scheduler.capacity.root.default.capacity=25
yarn.scheduler.capacity.root.default.maximum-capacity=50
yarn.scheduler.capacity.root.default.state=RUNNING
yarn.scheduler.capacity.root.default.user-limit-factor=2
yarn.scheduler.capacity.root.queues=default,hs2,longrun,sqoop,talend
yarn.scheduler.capacity.queue-mappings=u:talenduser:talend,u:dbenefi:longrun,u:sqoop:sqoop
yarn.scheduler.capacity.queue-mappings-override.enable=false
yarn.scheduler.capacity.root.default.maximum-am-resource-percent=1
yarn.scheduler.capacity.root.hs2.acl_submit_applications=*
yarn.scheduler.capacity.root.hs2.capacity=25
yarn.scheduler.capacity.root.hs2.maximum-am-resource-percent=0.4
yarn.scheduler.capacity.root.hs2.maximum-applications=500
yarn.scheduler.capacity.root.hs2.maximum-capacity=100
yarn.scheduler.capacity.root.hs2.ordering-policy=fair
yarn.scheduler.capacity.root.hs2.ordering-policy.fair.enable-size-based-weight=true
yarn.scheduler.capacity.root.hs2.state=RUNNING
yarn.scheduler.capacity.root.hs2.user-limit-factor=4
yarn.scheduler.capacity.root.longrun.acl_submit_applications=*
yarn.scheduler.capacity.root.longrun.capacity=15
yarn.scheduler.capacity.root.longrun.maximum-capacity=15
yarn.scheduler.capacity.root.longrun.state=RUNNING
yarn.scheduler.capacity.root.longrun.user-limit-factor=1
yarn.scheduler.capacity.root.sqoop.acl_submit_applications=*
yarn.scheduler.capacity.root.sqoop.capacity=10
yarn.scheduler.capacity.root.sqoop.maximum-am-resource-percent=1
yarn.scheduler.capacity.root.sqoop.maximum-capacity=100
yarn.scheduler.capacity.root.sqoop.state=RUNNING
yarn.scheduler.capacity.root.sqoop.user-limit-factor=10
yarn.scheduler.capacity.root.talend.acl_submit_applications=*
yarn.scheduler.capacity.root.talend.capacity=25
yarn.scheduler.capacity.root.talend.maximum-am-resource-percent=0.4
yarn.scheduler.capacity.root.talend.maximum-applications=500
yarn.scheduler.capacity.root.talend.maximum-capacity=100
yarn.scheduler.capacity.root.talend.ordering-policy=fair
yarn.scheduler.capacity.root.talend.ordering-policy.fair.enable-size-based-weight=true
yarn.scheduler.capacity.root.talend.state=RUNNING
yarn.scheduler.capacity.root.talend.user-limit-factor=4
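As a quick sanity check of the pasted capacities against the 840 GB cluster total mentioned in the question, here is the guaranteed memory each leaf queue works out to (illustrative arithmetic only, not output from the cluster):

```python
# Guaranteed memory per queue, from the pasted capacity percentages and
# the 840 GB YARN total stated in the question (illustrative only).

CLUSTER_GB = 840
capacities = {"default": 25, "hs2": 25, "longrun": 15, "sqoop": 10, "talend": 25}

# Leaf-queue capacities under one parent must sum to exactly 100.
assert sum(capacities.values()) == 100

for queue, pct in capacities.items():
    print(f"{queue}: {CLUSTER_GB * pct / 100:.0f} GB guaranteed")
# talend: 210 GB guaranteed (but maximum-capacity=100 lets it borrow up to 840 GB)
```

So 'talend' is guaranteed 210 GB but can elastically grow to the whole cluster, which is consistent with the question's observation that the queue itself has plenty of room while applications still wait.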


Explorer

Try this config instead, and let's see the logs please (make sure you replace spaces with new lines 🙂).

yarn.scheduler.capacity.maximum-am-resource-percent=0.2
yarn.scheduler.capacity.maximum-applications=10000
yarn.scheduler.capacity.node-locality-delay=40
yarn.scheduler.capacity.root.accessible-node-labels=*
yarn.scheduler.capacity.root.acl_administer_queue=*
yarn.scheduler.capacity.root.capacity=100
yarn.scheduler.capacity.root.default.acl_submit_applications=*
yarn.scheduler.capacity.root.default.capacity=0
yarn.scheduler.capacity.root.default.maximum-capacity=0
yarn.scheduler.capacity.root.default.state=RUNNING
yarn.scheduler.capacity.root.default.user-limit-factor=2
yarn.scheduler.capacity.root.queues=default,hs2,longrun,sqoop,talend
yarn.scheduler.capacity.queue-mappings=u:talenduser:talend,u:dbenefi:longrun,u:sqoop:sqoop
yarn.scheduler.capacity.queue-mappings-override.enable=false
yarn.scheduler.capacity.root.default.maximum-am-resource-percent=1
yarn.scheduler.capacity.root.hs2.acl_submit_applications=*
yarn.scheduler.capacity.root.hs2.capacity=25
yarn.scheduler.capacity.root.hs2.maximum-am-resource-percent=0.4
yarn.scheduler.capacity.root.hs2.maximum-applications=500
yarn.scheduler.capacity.root.hs2.maximum-capacity=100
yarn.scheduler.capacity.root.hs2.ordering-policy=fair
yarn.scheduler.capacity.root.hs2.ordering-policy.fair.enable-size-based-weight=true
yarn.scheduler.capacity.root.hs2.state=RUNNING
yarn.scheduler.capacity.root.hs2.user-limit-factor=4
yarn.scheduler.capacity.root.longrun.acl_submit_applications=*
yarn.scheduler.capacity.root.longrun.capacity=15
yarn.scheduler.capacity.root.longrun.maximum-capacity=15
yarn.scheduler.capacity.root.longrun.state=RUNNING
yarn.scheduler.capacity.root.longrun.user-limit-factor=1
yarn.scheduler.capacity.root.sqoop.acl_submit_applications=*
yarn.scheduler.capacity.root.sqoop.capacity=10
yarn.scheduler.capacity.root.sqoop.maximum-am-resource-percent=1
yarn.scheduler.capacity.root.sqoop.maximum-capacity=100
yarn.scheduler.capacity.root.sqoop.state=RUNNING
yarn.scheduler.capacity.root.sqoop.user-limit-factor=10
yarn.scheduler.capacity.root.talend.acl_submit_applications=*
yarn.scheduler.capacity.root.talend.capacity=25
yarn.scheduler.capacity.root.talend.maximum-applications=500
yarn.scheduler.capacity.root.talend.maximum-capacity=100
yarn.scheduler.capacity.root.talend.ordering-policy=fair
yarn.scheduler.capacity.root.talend.ordering-policy.fair.enable-size-based-weight=true
yarn.scheduler.capacity.root.talend.state=RUNNING
yarn.scheduler.capacity.root.talend.user-limit-factor=4