
How to disable user limits in Yarn Capacity Scheduler when queues are created

Rising Star

When we create a queue, it is created with the default value below. Is there any way to disable this feature, so that individual users can use the maximum cluster capacity when it is available?

yarn.scheduler.capacity.root.default.user-limit-factor=1

1 ACCEPTED SOLUTION


If you would like to use the maximum cluster capacity when it is available, you need to set user-limit-factor to 2, 3, or 4 depending on your queue capacity. For example, if your queue capacity is 25% of the total cluster capacity, you can set the user-limit-factor to at most 4, which means a single user can utilize 400% of its queue's capacity (i.e., the whole cluster).

Condition: the queue's maximum capacity must be higher than its capacity (for example, 50% capacity with 100% maximum capacity) for the parameter above to take effect.
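Putting the above together for the 25% example, the relevant capacity-scheduler settings would look something like this (shown in the same properties style as the question; the `root.default` queue path is just the one from this thread):

```
yarn.scheduler.capacity.root.default.capacity=25
yarn.scheduler.capacity.root.default.maximum-capacity=100
yarn.scheduler.capacity.root.default.user-limit-factor=4
```

With maximum-capacity at 100 and user-limit-factor at 4, a single user in this queue can grow to 4 x 25% = 100% of the cluster when it is idle.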


3 REPLIES

Super Guru
@RajuKV

Can you please elaborate on "when we create queue it's creating below default value"? What exactly is being created with a default value?

I think the property you are looking for is yarn.scheduler.capacity.root.support.services.minimum-user-limit-percent

But please confirm and see the following link.

http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.4/bk_yarn_resource_mgt/content/setting_user_li...


Rising Star

I am fully aware of user limits. My question was whether there is an option to turn them off/on. As per my understanding, there is no way to disable user limits; the only option is to tune them. If anyone disagrees with this statement, please let me know.
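To summarize the arithmetic discussed in this thread, here is a small sketch of the upper bound a single user can reach under user-limit-factor. This is a simplification: real Capacity Scheduler allocation also depends on minimum-user-limit-percent and the number of active users in the queue, which are ignored here.

```python
def user_max_cluster_share(queue_capacity_pct, user_limit_factor, queue_max_capacity_pct):
    """Upper bound, as a percentage of total cluster capacity, that one
    user can consume: queue capacity times the user-limit-factor, but
    never beyond the queue's maximum-capacity."""
    return min(queue_capacity_pct * user_limit_factor, queue_max_capacity_pct)

# The scenario from the accepted answer: 25% queue, ULF=4, max-capacity=100%
print(user_max_cluster_share(25, 4, 100))  # 100 -> one user can take the whole cluster

# With the default user-limit-factor=1, the same user is capped at the
# queue's own 25%, which is what the original question is about.
print(user_max_cluster_share(25, 1, 100))  # 25
```

This illustrates why the answer above says the limit cannot be switched off, only tuned: raising user-limit-factor (with a sufficiently high maximum-capacity) is what lets a user exceed the queue's configured share.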