How to resolve "Permission denied" errors in CDH

by Cloudera Employee Clint on 01-12-2016 09:42 AM - edited on 09-27-2016 09:13 AM by Community Manager



"Permission denied" errors can appear in a wide variety of situations and from nearly any application that uses CDH.


For example, when attempting to start the JobTracker with this command:


service hadoop-0.20-mapreduce-jobtracker start


You may see this error, or one similar:

Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs( 


While the steps that reproduce this error vary widely, the root causes are well defined. You'll know you're hitting this issue when you find the following line either on stdout or in the relevant log files:

Permission denied: user=XXX, access=WRITE, inode="/someDirectory":hdfs:supergroup:drwxr-xr-x
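Each field of that line tells you who was denied, what access was requested, and the owner, group, and mode of the inode involved. A quick way to pull those fields apart (a sketch using standard sed; the sample line is taken from the JobTracker example above):

```shell
# Sample "Permission denied" line as it appears in a CDH log
line='Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x'

# Split out the requesting user, requested access, target path, and the
# target's owner, group, and permission mode
echo "$line" | sed -E \
  's/.*user=([^,]+), access=([^,]+), inode="([^"]+)":([^:]+):([^:]+):(.*)/user=\1 access=\2 path=\3 owner=\4 group=\5 mode=\6/'
# prints: user=mapred access=WRITE path=/ owner=hdfs group=supergroup mode=drwxr-xr-x
```

Here, user "mapred" asked for WRITE access on "/", which is owned by hdfs:supergroup with mode 755, so only the hdfs user may write there.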


Applies To


CDH (all versions), MapReduce, HDFS, and other services that rely on reading from or writing to HDFS




Cause


Access to the HDFS filesystem and/or permissions on certain directories are not correctly configured.


Troubleshooting Steps


There are several solutions to attempt:


1) The /user/ directory is owned by "hdfs" with 755 permissions, so only the hdfs user can write to it. Unlike Unix/Linux, where root is the superuser, in HDFS the superuser is hdfs, not root. So you would need to do this:

sudo -u hdfs hadoop fs -mkdir /user/&lt;username&gt;
sudo -u hdfs hadoop fs -put myfile.txt /user/&lt;username&gt;/

If you want to create a home directory for root so you can store files in it, do:

sudo -u hdfs hadoop fs -mkdir /user/root
sudo -u hdfs hadoop fs -chown root /user/root

Then as root you can do "hadoop fs -put file /user/root/".

2) You may also be getting denied on the network port where the NameNode is supposed to be listening.


Fix this by changing the address the service listens on in /etc/hadoop/conf/core-site.xml. By default your NameNode may be listening on "localhost:8020", which only accepts connections from the local machine.


So to be clear, set the NameNode address property (fs.default.name, or fs.defaultFS on newer releases) to the host's actual hostname rather than localhost.
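As an example, the core-site.xml entry might look like the following (a sketch: namenode.example.com is a placeholder for your NameNode host's actual hostname, and fs.default.name is the classic Hadoop property name, superseded by fs.defaultFS in newer releases):

```xml
<!-- /etc/hadoop/conf/core-site.xml -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://namenode.example.com:8020</value>
</property>
```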



Then bounce the service with service hadoop-hdfs-namenode restart
Optional: validate with netstat -tupln | grep '8020'



Disclaimer: The information contained in this article was generated by third parties and not by Cloudera or its personnel. Cloudera cannot guarantee its accuracy or efficacy. Cloudera disclaims all warranties of any kind and users of this information assume all risk associated with it and with following the advice or directions contained herein. By visiting this page, you agree to be bound by the Terms and Conditions of Site Usage, including all disclaimers and limitations contained therein.