
Capacity Scheduler Maximum Capacity

Hi, in our setup we are using the YARN Capacity Scheduler and have many queues set up in a hierarchical fashion with well-configured minimum capacities. However, I am wondering what the best practice is for setting the maximum capacity value, i.e. the parameter yarn.scheduler.capacity.&lt;queue-path&gt;.maximum-capacity. Is it advisable to configure each queue with a maximum capacity of 100%, or something like 90-95% with some leeway for the default queue? In summary, what are the best practices to leverage the full cluster capacity while it is available, while still honouring the minimum queue capacities?
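For context, a minimal sketch of the kind of layout being described. The queue names (analytics, default) and the percentage values are hypothetical, chosen only to illustrate how capacity (the guaranteed minimum) and maximum-capacity (the elastic ceiling) relate in capacity-scheduler.xml:

```xml
<!-- capacity-scheduler.xml: hypothetical two-queue layout under root -->
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>analytics,default</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.analytics.capacity</name>
  <!-- guaranteed minimum share: 60% of the parent's capacity -->
  <value>60</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.analytics.maximum-capacity</name>
  <!-- elastic ceiling: may grow to 90% when other queues are idle -->
  <value>90</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.default.capacity</name>
  <value>40</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.default.maximum-capacity</name>
  <value>100</value>
</property>
```

Note that the per-queue capacity values at each level must sum to 100.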



@Greenhorn Techie I think there is no best practice that can be suggested beforehand. What is required here is incremental tuning based on what kind of jobs/workload you are expecting, which queues you consider critical, which queues have predictable workloads, etc.

For example, for a critical department's or project's queue that runs a heavy-duty reporting job only once a week, you can consider a higher maximum capacity such as 70-80% (of overall capacity), so that if and when required it can utilize "idle" cluster resources beyond its defined capacity.

Again, this depends on understanding the overall cluster needs and business requirements. Unless required for a mission-critical job, I would not set a very high max capacity for queues in general, as it could prove to be a bottleneck for the remaining queues. With time and experience of how a queue is used and what its needs are, these values should be tuned incrementally.
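To make the weekly-reporting example above concrete, a sketch with a hypothetical queue name (reporting): a small guaranteed share, but a generous ceiling it can burst into when the rest of the cluster is idle. The values are illustrative, not a recommendation:

```xml
<!-- capacity-scheduler.xml: a bursty, infrequently used reporting queue -->
<property>
  <name>yarn.scheduler.capacity.root.reporting.capacity</name>
  <!-- small guaranteed minimum, since the job runs only weekly -->
  <value>10</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.reporting.maximum-capacity</name>
  <!-- high ceiling so the weekly job can soak up idle capacity -->
  <value>80</value>
</property>
```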

Hi @Gaurav Sharma, thanks for your response. Yes, while I understand that there might not be a single best practice for maximum capacity, I wonder why each queue cannot have its maximum set to 100%, provided the minimum capacities are configured properly along with pre-emption, etc.?

In your example, you mentioned 70-80%. But again, with the minimum capacity parameters in place, under what circumstances would there be "bottlenecks" for resources?
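Since the question above hinges on pre-emption being enabled, here is a sketch of the switches involved. These are the standard Capacity Scheduler preemption properties; treat the exact tuning (which also involves further yarn.resourcemanager.monitor.capacity.preemption.* knobs not shown here) as something to verify against your HDP version's documentation:

```xml
<!-- yarn-site.xml: enable the scheduler monitor that performs preemption -->
<property>
  <name>yarn.resourcemanager.scheduler.monitor.enable</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.monitor.policies</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy</value>
</property>
```

With this enabled, containers from queues running above their guaranteed capacity can be reclaimed when an under-served queue needs its minimum share back.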

Further questions:

1. Is the maximum capacity derived from the overall global cluster capacity or is it only from the parent queue's capacity?

2. Is there anything like a "default" queue in an HDP setup that is mandatory? (Sorry, I could have tested this myself, but wanted to see if there is a ready answer.)

