ACLs not working in Capacity Scheduler in YARN (CDH5)

Explorer

I am trying to implement simple ACLs for the Capacity Scheduler in CDH5. I basically want all users to be able to submit to the default queue, but only certain users to administer it. I am starting with a single user in the "yarn.scheduler.capacity.root.acl_administer_queue" property.

My account, jehalter, should be the only user able to administer jobs (it is the only account listed in the acl_administer_queue property), and mapred queue -showacls looks correct:

[jehalter@dbk-i1 cdh5]$ mapred queue -showacls
14/06/25 15:53:44 INFO client.RMProxy: Connecting to ResourceManager at dbk-hm2/192.168.80.12:8032
Queue acls for user : jehalter

Queue Operations
=====================
root ADMINISTER_QUEUE,SUBMIT_APPLICATIONS
default ADMINISTER_QUEUE,SUBMIT_APPLICATIONS

I have a test user, urctest1, who should be able to submit jobs to the default queue but NOT administer it. The output of mapred queue -showacls looks correct for him too:

[urctest1@dbk-i1 ~]$ mapred queue -showacls
14/06/25 15:27:25 INFO client.RMProxy: Connecting to ResourceManager at dbk-hm2/192.168.80.12:8032
Queue acls for user : urctest1

Queue Operations
=====================
root SUBMIT_APPLICATIONS
default SUBMIT_APPLICATIONS

ADMINISTER_QUEUE is clearly missing from urctest1's list of Queue Operations. However, if urctest1 submits a request to kill another user's job (for example, a job submitted by jehalter), the ResourceManager gladly accepts the request and executes it:

[urctest1@dbk-i1 ~]$ mapred job -list
14/06/25 15:27:38 INFO client.RMProxy: Connecting to ResourceManager at dbk-hm2/192.168.80.12:8032
Total jobs:2
JobId State StartTime UserName Queue Priority UsedContainers RsvdContainers UsedMem RsvdMem NeededMem AM info
job_1403723317811_0004 RUNNING 1403724291216 jehalter default NORMAL 71 0 181760M 0M 181760M http://dbk-hm2:8088/proxy/application_1403723317811_0004/
job_1403723317811_0001 RUNNING 1403724067649 urctest2 default NORMAL 73 0 186880M 0M 186880M http://dbk-hm2:8088/proxy/application_1403723317811_0001/

[urctest1@dbk-i1 ~]$ mapred job -kill job_1403723317811_0004
14/06/25 15:28:08 INFO client.RMProxy: Connecting to ResourceManager at dbk-hm2/192.168.80.12:8032
Killed job job_1403723317811_0004

So urctest1 was able to kill a job owned by jehalter, even with yarn.scheduler.capacity.root.default.acl_administer_queue set to disallow it. My configuration follows:

<configuration>
  <property>
    <name>yarn.scheduler.capacity.maximum-applications</name>
    <value>100</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
    <value>0.1</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.resource-calculator</name>
    <value>org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>default</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.capacity</name>
    <value>100</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.user-limit-factor</name>
    <value>1</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.minimum-user-limit-percent</name>
    <value>25</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.maximum-capacity</name>
    <value>100</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.state</name>
    <value>RUNNING</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.acl_submit_applications</name>
    <value>*</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.acl_administer_queue</name>
    <value>jehalter</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.acl_administer_queue</name>
    <value>jehalter</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.node-locality-delay</name>
    <value>40</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.capacity</name>
    <value>100</value>
    <description>Default queue target capacity.</description>
  </property>
</configuration>

1 REPLY

Re: ACLs not working in Capacity Scheduler in YARN (CDH5)

Master Guru
Do you have ACLs enabled at the service level? They are disabled by default. The control property is "yarn.acl.enable".
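
For reference, a minimal sketch of what enabling ACLs looks like in yarn-site.xml (the yarn.admin.acl value below is just an illustration):

<!-- yarn-site.xml: service-level ACL switch. Queue ACLs in
     capacity-scheduler.xml are only enforced when this is true. -->
<property>
  <name>yarn.acl.enable</name>
  <value>true</value>
</property>
<!-- Optional: limit who counts as a cluster administrator
     (the default is "*", i.e. everybody). -->
<property>
  <name>yarn.admin.acl</name>
  <value>yarn,jehalter</value>
</property>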

Applications can also submit their own ACL requirements, which would trump queue-level ACL rules. You can check an application's ACL rules on its information/job configuration pages. If the app allows other users to MODIFY it, the kill will go through. Otherwise, the kill will succeed only if the other user is also a queue administrator.
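
As a rough sketch, these are the standard per-job MapReduce ACL properties that control this (the values shown are just illustrations; an empty value leaves only the job owner and admins with the right):

<!-- Set in mapred-site.xml or per job at submit time. If acl-modify-job
     names extra users/groups, they can kill the job even without
     ADMINISTER_QUEUE on the queue. -->
<property>
  <name>mapreduce.job.acl-modify-job</name>
  <value> </value>
</property>
<property>
  <name>mapreduce.job.acl-view-job</name>
  <value> </value>
</property>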