
Resource quotas within a single YARN queue

Expert Contributor

We have defined several YARN queues. Suppose there is a queue Q1 where users A and B run Spark jobs.

If A submits a job that demands all of the queue's resources, YARN allocates them. When B subsequently submits a job, it is starved for resources.

We want to prevent this by sharing resources more evenly between A and B (and any other incoming users) within Q1. We have already set the queue's ordering policy to fair. Can this greedy resource allocation behaviour be prevented?
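As a hedged sketch (property names are standard Capacity Scheduler keys; the `Q1` queue path is taken from the scenario above and may differ in your cluster), intra-queue sharing is usually tuned with a per-user guarantee plus intra-queue preemption:

```properties
# Guarantee each active user at least 50% of Q1 (two users split it evenly)
yarn.scheduler.capacity.root.Q1.minimum-user-limit-percent=50
# Cap any single user at 1x the queue's configured capacity
yarn.scheduler.capacity.root.Q1.user-limit-factor=1
# Order apps within the queue fairly rather than FIFO
yarn.scheduler.capacity.root.Q1.ordering-policy=fair

# Preemption must be on for the scheduler to claw back containers
# from A once B arrives (availability depends on your Hadoop version)
yarn.resourcemanager.scheduler.monitor.enable=true
yarn.resourcemanager.monitor.capacity.preemption.intra-queue-preemption.enabled=true
```

Note that without preemption these limits only shape *new* allocations: a long-running job that already holds the whole queue keeps its containers until they complete.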

3 REPLIES

Master Mentor

@Fernando Lopez Bello

Can you share your CapacityScheduler config?

Expert Contributor

yarn.scheduler.capacity.maximum-am-resource-percent=0.2
yarn.scheduler.capacity.maximum-applications=10000
yarn.scheduler.capacity.node-locality-delay=40
yarn.scheduler.capacity.root.accessible-node-labels=*
yarn.scheduler.capacity.root.acl_administer_queue=*
yarn.scheduler.capacity.root.capacity=100
yarn.scheduler.capacity.root.default.acl_submit_applications=*
yarn.scheduler.capacity.root.default.capacity=10
yarn.scheduler.capacity.root.default.maximum-capacity=30
yarn.scheduler.capacity.root.default.state=RUNNING
yarn.scheduler.capacity.root.default.user-limit-factor=2
yarn.scheduler.capacity.root.queues=Hive,Zeppelin,default
yarn.scheduler.capacity.queue-mappings=u:zeppelin:Zeppelin,u:hdfs:Hive,g:dl-analytics-group:Zeppelin
yarn.scheduler.capacity.queue-mappings-override.enable=false
yarn.scheduler.capacity.root.Hive.acl_administer_queue=*
yarn.scheduler.capacity.root.Hive.acl_submit_applications=*
yarn.scheduler.capacity.root.Hive.capacity=50
yarn.scheduler.capacity.root.Hive.maximum-capacity=90
yarn.scheduler.capacity.root.Hive.minimum-user-limit-percent=25
yarn.scheduler.capacity.root.Hive.ordering-policy=fair
yarn.scheduler.capacity.root.Hive.ordering-policy.fair.enable-size-based-weight=false
yarn.scheduler.capacity.root.Hive.priority=10
yarn.scheduler.capacity.root.Hive.state=RUNNING
yarn.scheduler.capacity.root.Hive.user-limit-factor=2
yarn.scheduler.capacity.root.Zeppelin.acl_administer_queue=*
yarn.scheduler.capacity.root.Zeppelin.acl_submit_applications=*
yarn.scheduler.capacity.root.Zeppelin.capacity=40
yarn.scheduler.capacity.root.Zeppelin.maximum-capacity=80
yarn.scheduler.capacity.root.Zeppelin.minimum-user-limit-percent=20
yarn.scheduler.capacity.root.Zeppelin.ordering-policy=fair
yarn.scheduler.capacity.root.Zeppelin.ordering-policy.fair.enable-size-based-weight=false
yarn.scheduler.capacity.root.Zeppelin.priority=5
yarn.scheduler.capacity.root.Zeppelin.state=RUNNING
yarn.scheduler.capacity.root.Zeppelin.user-limit-factor=3
yarn.scheduler.capacity.root.default.minimum-user-limit-percent=25
yarn.scheduler.capacity.root.default.ordering-policy=fair
yarn.scheduler.capacity.root.default.ordering-policy.fair.enable-size-based-weight=false
yarn.scheduler.capacity.root.default.priority=0
yarn.scheduler.capacity.root.maximum-capacity=100
yarn.scheduler.capacity.root.ordering-policy=priority-utilization
yarn.scheduler.capacity.root.priority=0
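For reference, in this config `minimum-user-limit-percent=25` on the Hive queue means each active user is guaranteed at least 25% of the queue, so up to four concurrent users split it evenly. A rough sketch of the per-user limit computation, simplified (it ignores queue elasticity, resource calculators, and container granularity):

```python
def user_limit(num_active_users: int,
               min_user_limit_percent: float = 25.0,
               user_limit_factor: float = 1.0) -> float:
    """Approximate per-user share of a queue, as a percentage.

    Each active user gets an equal split of the queue, but never less
    than min_user_limit_percent, and never more than
    user_limit_factor times the queue's configured capacity.
    """
    equal_split = 100.0 / num_active_users
    share = max(equal_split, min_user_limit_percent)
    return min(share, user_limit_factor * 100.0)

if __name__ == "__main__":
    for n in (1, 2, 3, 4, 8):
        print(f"{n} active user(s): {user_limit(n):.1f}% each")
```

With one user the whole queue is available; as a second, third, and fourth user arrive, shares shrink toward the 25% floor, and additional users beyond that wait for capacity.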
