
HDFS: Cannot change permissions of a single folder. No error is printed in the logs or on the CLI.




I cannot change the permissions of a single folder on HDFS. Every other folder's permissions can be changed without a problem; only this one folder is affected. Current permissions are:


drwxrwx--x   - app app          0 2017-03-28 14:29 /app/drops

Neither of these commands changes the permissions, and nothing is printed, not even a permission-denied error:


root@ss01nn01 # hdfs dfs -setfacl -m other::r-x /app/drops
root@ss01nn01 # hdfs dfs -chmod 775 /app/drops


I've enabled more debugging in HDFS via Cloudera Manager, but still nothing.
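For completeness, client-side debug logging can also be turned on per command, without touching Cloudera Manager. This is a sketch using the standard HADOOP_ROOT_LOGGER environment variable:

```shell
# Re-run the failing command with client-side DEBUG logging enabled.
# HADOOP_ROOT_LOGGER is a standard Hadoop client environment variable;
# this raises verbosity on the client only, not on the NameNode.
HADOOP_ROOT_LOGGER=DEBUG,console hdfs dfs -chmod 775 /app/drops
```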


hdfs fsck /


reports no issues. What else can we try in order to track this down and set the permissions?




Do you use Sentry with HDFS ACL Sync enabled in your cluster, i.e. is HDFS
-> Configuration -> "Enable Sentry Synchronization" checked?

If yes, is /app or /app/drops configured as a path prefix under HDFS ->
Configuration -> "Sentry Synchronization Path Prefixes"?

If yes, then Sentry is currently managing all permissions for that path,
and will ignore any type of change you try to make. You can use GRANT
statements in Hive or Impala to add explicit access to tables or databases
using this path as their location field, but direct manipulation of
permissions will be entirely ignored.
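For illustration, access on a Sentry-managed location is granted through Hive/Impala rather than via chmod or setfacl. This is only a sketch; the JDBC URL, role, group, and database names below are hypothetical placeholders, not values from this cluster:

```shell
# Hypothetical sketch: on Sentry-managed paths, access is granted via
# Hive/Impala GRANT statements, not via chmod/setfacl. The JDBC URL,
# role, group, and database names are placeholders for illustration.
beeline -u "jdbc:hive2://hs2-host:10000/default" -n hive -e "
  CREATE ROLE drops_role;
  GRANT ROLE drops_role TO GROUP app;
  GRANT ALL ON DATABASE drops_db TO ROLE drops_role;
"
```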

This feature, if you're using it, is further explained at


I do not have Sentry enabled.  At least I don't have that option under the Configuration menu.


I don't have "Enable Sentry Synchronization" enabled either.


And only this one path on HDFS has the issue; no other path is affected.


Any ideas?


Thank you for the help thus far.

Could you please run and pass the output of the following commands, all run from the same shell session?

hadoop fs -ls -d /
hadoop fs -ls -d /app
hadoop fs -ls -d /app/drop
hadoop fs -getfacl /app/drop

Additionally, on the NameNode host, could you post the output of running the command below as-is?

grep -F authorization.provider -A1 $(ls -rtd /var/run/cloudera-scm-agent/process/*-NAMENODE | tail -1)/hdfs-site.xml


Thank you.  Here is the result:


# hadoop fs -ls -d /
drwxr-xr-x - hdfs supergroup 0 2017-03-15 11:45 /
# hadoop fs -ls -d /app/
drwxrwxrwx - appacnt appacnt 0 2017-03-28 23:46 /app
# hadoop fs -getfacl /app/drop
# file: /app/drop
# owner: appacnt
# group: appacnt



# grep -F authorization.provider -A1 $(ls -rtd /var/run/cloudera-scm-agent/process/*-NAMENODE | tail -1)/hdfs-site.xml



The "other" permission bits simply will not change, no matter what I try. I also tried kinit as other users, including the owner of that folder, but that had no effect.







Your cluster is running a custom authorization plugin inside the NameNode, which is likely controlling this directory specifically. You'll need to contact the authors of the "" module to gain more information on why this is done and how to change the permissions.

Sentry HDFS ACLs work in a similar fashion (a Sentry HDFS Authz plugin is inserted via the same config you noticed above) and likewise ignore permission changes applied to the controlled paths, as I described before; in your case, though, it appears to be something locally engineered and configured.

I'd recommend contacting the developers of your plugin for more information rather than removing it from your HDFS Configuration safety valves (which would resolve the issue, but it's probably there for a reason).
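If it helps narrow things down, the configured provider class can also be read back with `hdfs getconf`. A sketch; the exact property name depends on the version (older CDH releases use `dfs.namenode.authorization.provider.class`, while Hadoop 2.7+ renamed it to `dfs.namenode.inode.attributes.provider.class`):

```shell
# Print the plugged-in authorization provider class, if any is set.
# Note: getconf reads the *client* configuration; the NameNode's
# effective config lives under the Cloudera Manager process directory.
hdfs getconf -confKey dfs.namenode.authorization.provider.class
hdfs getconf -confKey dfs.namenode.inode.attributes.provider.class
```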


Thanks! This helps me too!