In order to limit who can administer pools and applications, the appropriate YARN and resource pool configuration needs to be in place.

Two access control levels are in place:
  • YARN administration
  • Scheduler configuration
Any user who is able to administer YARN can also administer applications. Administering applications includes killing them and moving them from pool to pool. The YARN administrator is a superuser with access to all pools and applications.

Besides the YARN administrator(s), the schedulers have their own pool access control lists. These access control lists define the users and groups that can administer a pool (Administration Access Control) or can submit to a pool (Submission Access Control). The Fair Scheduler configuration, the default and recommended scheduler for CDH, is discussed in the Dynamic Resource Pool documentation. The Capacity Scheduler uses its own XML-based configuration in Cloudera Manager.

Pool access is inherited from the parent pool for both administration and submit access. If access is granted on a parent pool, access is also granted on the child pool. By default the root pool, the parent of all pools, allows anyone to submit and administer applications. Since all child pools inherit this configuration, the root pool must first be configured with a list of users and groups who can submit and administer (kill) applications in that pool. A pool ACL does not have to grant access to any specific user or group and can be empty. These empty ACLs are shown in the configuration as a single space.
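As an illustration (the child pool name and the user and group names here are hypothetical), a fair-scheduler.xml that closes the root pool with empty single-space ACLs and grants access only on a child pool might look like this:

```xml
<allocations>
  <queue name="root">
    <!-- empty single-space ACLs: no one inherits access from root -->
    <aclSubmitApps> </aclSubmitApps>
    <aclAdministerApps> </aclAdministerApps>
    <queue name="marketing">
      <!-- users alice,bob plus the analysts group may submit here -->
      <aclSubmitApps>alice,bob analysts</aclSubmitApps>
      <!-- only the admins group may administer (kill/move) apps here -->
      <aclAdministerApps> admins</aclAdministerApps>
    </queue>
  </queue>
</allocations>
```

Because access is inherited downward, keeping the root ACLs empty ensures that only the ACLs set on each child pool (plus the YARN administrator ACL) determine who can submit to or administer it.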

The application owner is always allowed to administer the application that was submitted. That access cannot be removed.


Restricting Administrative Access

Use the following steps to restrict which users can administer YARN via Cloudera Manager:
  1. Login to Cloudera Manager
  2. Navigate to YARN -> Configuration -> Service-Wide -> Enable ResourceManager ACLs
  3. Check to Enable (default value is on in Cloudera Manager)
  4. Navigate to YARN -> Configuration -> Service-Wide -> Admin ACL
  5. List only the users and groups who will have the ability to administer YARN (see below for the format of the entry)
  6. Save changes and restart.
Note: An empty value for yarn.admin.acl is not considered valid by YARN; YARN will fall back on the value configured in yarn-default.xml, which allows access to everyone.
Note: If the user configured as the yarn.system.user is not added to the ACL, the command line yarn command and Cloudera Manager's ability to update the scheduler configuration will break. The ACL should always contain that user (the default user is yarn).

If the cluster is not managed by Cloudera Manager, update the following setting in the yarn-site.xml file on the Resource Managers and then restart the Resource Managers:

<property>
  <name>yarn.admin.acl</name>
  <value>users groups</value>
</property>


See below for the format of the ACL entry.


Restricting Pool Access

Use the following steps to limit who can submit and administer applications in a pool:
  1. Login to Cloudera Manager
  2. Navigate to Clusters -> Dynamic Resource Pools -> Configuration
  3. Click the Edit button to the right (next to the pool). A pop-up window will appear.
  4. Click on the Submission Access Control tab and list users and groups who will have permission to submit in all pools and subpools.
  5. Click the Administration Access Control tab and list users and groups who will have permission to kill submitted jobs in all pools and subpools.
  6. Click OK to save.
  7. Cloudera Manager will now automatically attempt to refresh the Resource Manager's scheduler configuration.
  8. Repeat the process for all configured pools.
Note: Only Administration Access or application ownership grants the ability to kill applications.

If the cluster is not managed by Cloudera Manager, the following settings must be updated in the fair-scheduler.xml file on all Resource Managers for each pool that requires it:


<aclSubmitApps>users groups</aclSubmitApps> 
<aclAdministerApps>users groups</aclAdministerApps>
After that change has been saved on both Resource Managers, run the following command on one of the Resource Managers to reload the configuration:
yarn rmadmin -refreshQueues

ACL entry format


The format of an ACL entry is a list of users and a list of groups, each comma-separated; the user list and group list are separated by a single space:
user1,user2 group1,group2
  • If no space is found all values are considered to be users.
  • All values after the first space are considered to be groups.
  • If there are no users the entry starts with a space.
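The parsing rules above can be sketched in Python (an illustrative helper, not YARN's actual implementation; the function name is made up):

```python
def parse_acl_entry(entry):
    """Split a YARN-style ACL entry into (users, groups).

    Everything before the first space is a comma-separated user list;
    everything after it is a comma-separated group list. With no space,
    all values are users; an entry starting with a space has no users.
    """
    users_part, _, groups_part = entry.partition(" ")
    users = [u for u in users_part.split(",") if u]
    groups = [g for g in groups_part.split(",") if g]
    return users, groups

# "user1,user2 group1,group2" -> (["user1", "user2"], ["group1", "group2"])
# "user1,user2"               -> (["user1", "user2"], [])
# " group1"                   -> ([], ["group1"])
```

Note that an entry consisting of a single space parses to no users and no groups, which matches the empty-ACL convention described earlier.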

Audit logging


The Resource Manager logs attempts to kill or move applications. These messages are logged by the RMAuditLogger as part of standard operational logging. The following is logged when the user test1 attempts to kill an application submitted by a different user:

2015-03-31 11:08:09,375 WARN org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=test1 IP= OPERATION=Kill Application Request TARGET=ClientRMService RESULT=FAILURE DESCRIPTION=Unauthorized user PERMISSIONS=User doesn't have permissions to MODIFY_APP APPID=application_1427825170761_0001

2015-03-31 11:08:09,384 INFO org.apache.hadoop.ipc.Server: IPC Server handler 46 on 8032, call org.apache.hadoop.yarn.api.ApplicationClientProtocolPB.forceKillApplication from Call#1 Retry#0 org.apache.hadoop.yarn.exceptions.YarnException: User test1 cannot perform operation MODIFY_APP on application_1427825170761_0001



 NOTE: This article was taken from our internal Knowledge Base.  To access the original article please use the following link (customer login required):

New Contributor

After making these changes, which work well, the logs from the RM UI or the history server don't show up; it says the user is not authorized to view logs.

I have enabled Kerberos authentication for the HTTP consoles in YARN as well.


Has anyone faced the same issue?

Version history
Revision #: 11 of 11
Last update: 08-13-2015 05:47 PM